Android
+Fix Cleartext Traffic Error in Android 9 Pie (April 2, 2020, 1:57 a.m.)

1- Add a network security config file under res/xml:
res/xml/network_security_config.xml



2- Add a domain config and set cleartextTrafficPermitted to "true":
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">your_domain.com</domain>
    </domain-config>
</network-security-config>



3- Add your network security config to your Android manifest file under application:
<application
    android:name=".MyApplication"
    android:networkSecurityConfig="@xml/network_security_config">
</application>

+AndroidX (Nov. 8, 2019, 7:07 p.m.)

AndroidX and the Android Support Library cannot live side-by-side in the same Android project - doing so will lead to build failures.

AndroidX (Jetpack) is the successor to the Android Support Library.

+AndroidX / Jetifier (Nov. 8, 2019, 7 p.m.)

android.useAndroidX: When set to true, this flag indicates that you want to start using AndroidX from now on. If the flag is absent, Android Studio behaves as if the flag were set to false.


android.enableJetifier: When set to true, this flag indicates that you want to have tool support (from the Android Gradle plugin) to automatically convert existing third-party libraries as if they were written for AndroidX. If the flag is absent, Android Studio behaves as if the flag were set to false.
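
Both flags live in the project's gradle.properties file; a typical fully-migrated project sets:

android.useAndroidX=true
android.enableJetifier=true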

--------------------------------------------------------------------------

The Jetifier tool migrates support-library-dependent libraries to rely on the equivalent AndroidX packages instead. The tool also lets you migrate an individual library directly, instead of using the Android Gradle plugin bundled with Android Studio.
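
As a rough sketch of the standalone usage (the file names here are placeholders), after downloading the jetifier-standalone tool you can convert an individual library like this:

./jetifier-standalone -i old-support-lib.aar -o converted-androidx-lib.aar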


For Example:

Suppose you have PhotoView.java among your dependencies, and it uses the Support Library's AppCompatImageView:

import android.support.v7.widget.AppCompatImageView;

This class has now moved to an androidx package, so how does PhotoView get the AndroidX AppCompatImageView? And yet the app still runs on the device.

Who makes this work?

Jetifier, which converts all support packages of the dependency at build time.

Jetifier will convert android.support.v7.widget.AppCompatImageView to androidx.appcompat.widget.AppCompatImageView while building the project.
Conclusion

Enabling Jetifier is important when you migrate from Support Libraries to AndroidX.

--------------------------------------------------------------------------

+Animations (June 14, 2019, 2:36 p.m.)

https://android.googlesource.com/platform/frameworks/base/+/HEAD/core/res/res/anim

+Platform codenames, versions, API levels, and NDK releases (May 26, 2019, 11:01 p.m.)

Codename              Version          API level / NDK release
Pie                   9                API level 28
Oreo                  8.1.0            API level 27
Oreo                  8.0.0            API level 26
Nougat                7.1              API level 25
Nougat                7.0              API level 24
Marshmallow           6.0              API level 23
Lollipop              5.1              API level 22
Lollipop              5.0              API level 21
KitKat                4.4 - 4.4.4      API level 19
Jelly Bean            4.3.x            API level 18
Jelly Bean            4.2.x            API level 17
Jelly Bean            4.1.x            API level 16
Ice Cream Sandwich    4.0.3 - 4.0.4    API level 15, NDK 8
Ice Cream Sandwich    4.0.1 - 4.0.2    API level 14, NDK 7
Honeycomb             3.2.x            API level 13
Honeycomb             3.1              API level 12, NDK 6
Honeycomb             3.0              API level 11
Gingerbread           2.3.3 - 2.3.7    API level 10
Gingerbread           2.3 - 2.3.2      API level 9, NDK 5
Froyo                 2.2.x            API level 8, NDK 4
Eclair                2.1              API level 7, NDK 3
Eclair                2.0.1            API level 6
Eclair                2.0              API level 5
Donut                 1.6              API level 4, NDK 2
Cupcake               1.5              API level 3, NDK 1
(no codename)         1.1              API level 2
(no codename)         1.0              API level 1

+Action Bar, Toolbar, App Bar (May 26, 2019, 9:17 p.m.)

Toolbar is a generalization of the Action Bar pattern that gives you much more control and flexibility. Toolbar is a view in your hierarchy just like any other, making it easier to interleave with the rest of your views, animate it, and react to scroll events.

You can also set it as your Activity's action bar, meaning that your standard options menu actions will be displayed within it.
In other words, the ActionBar is now a special kind of Toolbar.

The app bar, formerly known as the action bar in Android, is a special kind of toolbar that is used for branding, navigation, search, and actions.
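
As a minimal sketch (the view ID here is illustrative), a Toolbar is declared in the layout like any other view:

<androidx.appcompat.widget.Toolbar
    android:id="@+id/my_toolbar"
    android:layout_width="match_parent"
    android:layout_height="?attr/actionBarSize"
    android:background="?attr/colorPrimary" />

It can then be promoted to the Activity's app bar by calling setSupportActionBar() on it.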

--------------------------------------------------------------------

Toolbar provides greater control to customize its appearance than the old ActionBar, and Toolbar features are fully supported on devices running older Android OS versions via the AppCompat support library.

Use a Toolbar as a replacement for the ActionBar. With this approach you can still continue to use ActionBar features such as menus, selections, etc.

Or use a standalone Toolbar, placed wherever you want in your application.

--------------------------------------------------------------------

Toolbars are more flexible than the ActionBar. We can easily modify a Toolbar's color, size and position, and we can also add labels, logos, navigation icons and other views to it. With Material Design, Android updated the AppCompat support libraries so that we can use Toolbars on devices running API level 7 and up.

--------------------------------------------------------------------

+XML - Introduction (April 25, 2019, 1:44 p.m.)

XML describes the views in your activities, and Java tells them how to behave.
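
For instance, a hypothetical layout file declares a button:

<Button
    android:id="@+id/my_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Click me" />

and the Activity's Java code then gives it behavior, e.g. by looking it up with findViewById() and attaching a setOnClickListener() callback.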

+Common naming conventions for icon assets (April 22, 2019, 4:02 a.m.)

Asset Type                        Prefix            Example
Icons                             ic_               ic_star.png
Launcher icons                    ic_launcher       ic_launcher_calendar.png
Menu icons and Action Bar icons   ic_menu           ic_menu_archive.png
Status bar icons                  ic_stat_notify    ic_stat_notify_msg.png
Tab icons                         ic_tab            ic_tab_recent.png
Dialog icons                      ic_dialog         ic_dialog_info.png

+Android Studio - Transparent Background Launcher Icon (April 22, 2019, 2:51 a.m.)

1- File > New > Image Asset.

2- Turn to Launcher Icons (Adaptive and Legacy) in Icon Type.

3- Choose Image in Asset Type and select your picture inside Path field (Foreground Layer tab).

4- Create or download a PNG file with a transparent background, 512x512 px in size (this is the size of ic_launcher-web.png).
PNG link: https://i.stack.imgur.com/Pwbuz.png

5- In Background Layer tab select Image in Asset Type and load the transparent background from step 4.

6- In Legacy tab select Yes for all Generate, None for Shape.

7- In Foreground Layer and Background Layer tabs you can change trim size.

Though you will see a black background behind the image in the Preview window, after pressing Next and Finish and compiling the application, you will see a transparent background on Android 5 and Android 8.

+NDK (April 19, 2019, 6:38 p.m.)

The Native Development Kit (NDK) is a set of tools that allow you to use C and C++ code in your Android app. It provides platform libraries to manage native activities and access hardware components such as sensors and touch input.

The NDK may not be appropriate for most novice Android programmers who need to use only Java code and framework APIs to develop their apps. However, the NDK can be useful for the following cases:

- Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.

- Reuse code between your iOS and Android apps.

- Use libraries like FFMPEG, OpenCV, etc.

+SDK / NDK (April 19, 2019, 6:34 p.m.)

Software Development Kit (SDK)
Native Development Kit (NDK)


Traditionally, most software development kits (SDKs) were in C, with very few in C++. Then Google came along, released a Java-based library for Android, and called it an SDK.

However, then came the demand for a C/C++-based library for development, primarily from C/C++ developers aiming at game development and some high-performance apps.

So, Google released a C/C++-based library called the Native Development Kit (NDK).

+ADB (Oct. 2, 2015, 5:04 p.m.)

apt install android-tools-adb android-tools-fastboot

+Android Development Environment (July 6, 2016, 11:58 a.m.)

Visit the following links to get information about the dependencies you might need for the SDK version you intend to download:

http://socialcompare.com/en/comparison/android-versions-comparison
http://developer.android.com/guide/topics/manifest/uses-sdk-element.html#ApiLevels
https://cordova.apache.org/docs/en/latest/guide/platforms/android/

----------------------------------------------------------------------

You might find the tools and all the dependencies in the following links:

http://osgard.blogspot.com/2011/11/download-of-android-sdk-components.html
https://dl.zjuqsc.com/android/android-sdk-linux/
http://archive.virtapi.org/packages/a/android-sdk-build-tools/

----------------------------------------------------------------------

1- Create a folder preferably name it "android-sdk-linux" in any location.

2- Downloading SDK Tools:
From the following link, scroll to the bottom of the page to the table titled "Command line tools only" and download the "Linux" package.
https://developer.android.com/studio/index.html
Extract the downloaded file "sdk-tools-linux.zip" to the folder you created in step 1.

3- Download an API level (for example, android-15_r03.zip or android-15.zip which is for Android 4.0.4).
Create a folder named "platforms" in "android-sdk-linux" and extract the downloaded file to it.

4- Download the latest version of `build-tools` (build-tools_r25-linux.zip).
Create a folder named `build-tools` in `android-sdk-linux` and extract the archive into it.
You need to rename the extracted folder to `25`.

5- Download the latest version of `platform-tools` (platform-tools_r23.0.1-linux.zip).
Extract it into the folder `android-sdk-linux`. The archive already contains a folder named `platform-tools`, so there is no need to create any further folders.

6- Open the file `~/.bashrc` and add the following line to it:
export ANDROID_HOME=/home/mohsen/Programs/Android/Development/android-sdk-linux

7- apt install openjdk-9-jdk
If you got errors like this:
dpkg: warning: trying to overwrite '/usr/lib/jvm/java-9-openjdk-amd64/include/linux/jawt_md.h', which is also in package openjdk-9-jdk-headless

To solve the error:
apt-get -o Dpkg::Options::="--force-overwrite" install openjdk-9-jdk

----------------------------------------------------------------------

+AVD with HAXM or KVM (Emulators) (April 10, 2016, 9:25 a.m.)

Official Website:
https://software.intel.com/en-us/android/articles/intel-hardware-accelerated-execution-manager

--------------------------------------------------------

For a faster emulator, use the HAXM device driver.
Linux Link:
https://software.intel.com/en-us/blogs/2012/03/12/how-to-start-intel-hardware-assisted-virtualization-hypervisor-on-linux-to-speed-up-intel-android-x86-emulator

As described in the above link, Linux users need to use KVM.
Taken from the above website:
(Since Google mainly supports Android build on Linux platform (with Ubuntu 64-bit OS as top Linux platform, and OS X as 2nd), and a lot of Android Developers are using AVD on Eclipse or Android Studio hosted by a Linux system, it is very critical that Android developers take advantage of Intel hardware-assisted KVM virtualization for Linux just like HAXM for Windows and OS X.)

--------------------------------------------------------

KVM Installation:
https://help.ubuntu.com/community/KVM/Installation

1- egrep -c '(vmx|svm)' /proc/cpuinfo
If the output is 0 it means that your CPU doesn't support hardware virtualization.

2- apt install cpu-checker
Now you can check if your cpu supports kvm:
# kvm-ok

3- To see if your processor is 64-bit, you can run this command:
egrep -c ' lm ' /proc/cpuinfo
If 0 is printed, it means that your CPU is not 64-bit.
If 1 or higher, it is.
Note: lm stands for Long Mode which equates to a 64-bit CPU.

4- Now see if your running kernel is 64-bit:
uname -m

5- apt install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils ia32-libs-multiarch
If a screen with `Postfix Configuration` was displayed, ignore it by selecting `No Configuration`.

6- Next, add your <username> account to the kvm and libvirtd groups:
sudo adduser mohsen kvm
sudo adduser mohsen libvirtd

7-Verify Installation:
You can test if your install has been successful with the following command:
sudo virsh -c qemu:///system list
If successful, your screen will show the following:
Id Name State

----------------------------------------------------

8- Install Java:
Java has to be installed in order to run Android emulator x86 system images.
sudo apt-get install openjdk-8-jre

9- Download a System Image from the following link:
http://mirrors.neusoft.edu.cn/android/repository/sys-img/android/
Create a folder named `system-images` in `android-sdk-linux` and extract the downloaded system image in it. (You might need to create another folder inside, named `default`.)
Run the Android SDK Manager; you will probably see the system image under `Extras`, marked as broken.
If so, to solve the problem, download its API from this link and extract it into the `platforms` folder:
http://downloads.puresoftware.org/files/android/API/

10- Start the AVD from the Android SDK directly from the terminal and create a virtual device:
~/Programs/Android/Development/android-sdk-linux/tools/android avd

--------------------------------------------------------

Angular
+AsyncSubject (Oct. 27, 2019, 7:07 p.m.)

AsyncSubject

This is very different from the others. With AsyncSubject, you only want the very last value as the subject completes. Imagine that a bunch of values could potentially be sent out, but you're only interested in the most up-to-date value.

AsyncSubject emits the last value, and only the last value, to subscribers, and only when the sequence of data being sent out actually completes.

-----------------------------------------------------------

While the BehaviorSubject and ReplaySubject both store values, the AsyncSubject works a bit differently. The AsyncSubject is a Subject variant where only the last value of the Observable execution is sent to its subscribers, and only when the execution completes.
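
A minimal sketch, using the same import style as the Subject examples in these notes:

import * as Rx from "rxjs";

const subject = new Rx.AsyncSubject();

subject.subscribe((data) => {
    console.log(data); // 3 (only the last value, delivered on complete)
});

subject.next(1);
subject.next(2);
subject.next(3);
subject.complete(); // nothing is emitted until this line runs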

-----------------------------------------------------------

+ReplaySubject (Oct. 27, 2019, 7:04 p.m.)

ReplaySubject:

Like BehaviorSubject, ReplaySubject can replay the last value that was sent out to any new subscribers. The difference is that it can also replay all of the previous values if you like. You can think of this as caching any data that has been sent out, so that any other components that subscribe later can still get that data.

With ReplaySubject we can replay everything that was previously sent.

-----------------------------------------------------------

The ReplaySubject is comparable to the BehaviorSubject in that it can send "old" values to new subscribers. However, it has the extra characteristic that it can record a part of the observable execution, and can therefore store multiple old values and "replay" them to new subscribers.

-----------------------------------------------------------

When creating the ReplaySubject you can specify how many values you want to store and for how long you want to store them. In other words, you can specify: "I want to store the last 5 values that were emitted in the last second prior to a new subscription."
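
A minimal sketch showing the buffer size argument (a window time in milliseconds could be passed as a second argument):

import * as Rx from "rxjs";

const subject = new Rx.ReplaySubject(2); // buffer the last 2 values

subject.next(1);
subject.next(2);
subject.next(3);

// A late subscriber still receives the buffered values: 2, then 3.
subject.subscribe((data) => {
    console.log(data);
});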

-----------------------------------------------------------

+BehaviorSubject (Oct. 27, 2019, 7 p.m.)

BehaviorSubject:

BehaviorSubject is very similar to Subject, except that it has one big feature that Subject doesn't have: the ability for subscribers that come in later in the flow to still get some of the previous data.

BehaviorSubject allows you to send the last piece of data to any new observers, any new subscribers. That way they can still stay in sync. They're not going to have all the previous values, but at least they have the latest value.

-----------------------------------------------------------

The BehaviorSubject has the characteristic that it stores the “current” value. This means that you can always directly get the last emitted value from the BehaviorSubject.

-----------------------------------------------------------

There are two ways to get this last emitted value. You can either get the value by accessing the .value property on the BehaviorSubject, or you can subscribe to it. If you subscribe to it, the BehaviorSubject will directly emit the current value to the subscriber, even if the subscriber subscribes much later than the value was stored.
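
A minimal sketch of both ways, in the style of the other Subject examples in these notes:

import * as Rx from "rxjs";

const subject = new Rx.BehaviorSubject(0); // BehaviorSubject requires an initial value

subject.next(1);
console.log(subject.value); // 1 (read the current value directly)

// A late subscriber immediately receives the current value, 1.
subject.subscribe((data) => {
    console.log(data);
});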

-----------------------------------------------------------

+RxJS (ReactiveX) (Oct. 27, 2019, 6:48 p.m.)

RxJS is a library for composing asynchronous and event-based programs by using observable sequences. It provides one core type, the Observable, satellite types (Observer, Schedulers, Subjects) and operators inspired by Array#extras (map, filter, reduce, every, etc) to allow handling asynchronous events as collections.


ReactiveX combines the Observer pattern with the Iterator pattern and functional programming with collections to fill the need for an ideal way of managing sequences of events.

--------------------------------------------------------

The essential concepts in RxJS which solve async event management are:

- Observable: represents the idea of an invokable collection of future values or events.
- Observer: is a collection of callbacks that knows how to listen to values delivered by the Observable.
- Subscription: represents the execution of an Observable, is primarily useful for cancelling the execution.
- Operators: are pure functions that enable a functional programming style of dealing with collections with operations like map, filter, concat, flatMap, etc.
- Subject: is the equivalent to an EventEmitter, and the only way of multicasting a value or event to multiple Observers.
- Schedulers: are centralized dispatchers to control concurrency, allowing us to coordinate when computation happens on e.g. setTimeout or requestAnimationFrame or others.

--------------------------------------------------------

Examples:


Normally you register event listeners.

var button = document.querySelector('button');
button.addEventListener('click', () => console.log('Clicked!'));



Using RxJS you create an observable instead.

var button = document.querySelector('button');
Rx.Observable.fromEvent(button, 'click')
    .subscribe(() => console.log('Clicked!'));

--------------------------------------------------------

+Subjects (Oct. 27, 2019, 1:52 p.m.)

Subject provides a way to send one or more data values to listeners.

With Subject we send data to subscribed observers; any previously emitted data is not going to be sent to observers that subscribed later. You're only going to get the data that occurs after you've subscribed.

-----------------------------------------------------------

A Subject is like an Observable. It can be subscribed to, just like you normally would with Observables. It also has methods like next(), error() and complete() just like the observer you normally pass to your Observable creation function.

The main reason to use Subjects is to multicast. An Observable by default is unicast. Unicasting means that each subscribed observer owns an independent execution of the Observable.

-----------------------------------------------------------

Subjects are used for multicasting Observables. This means that Subjects will make sure each subscription gets the exact same value, as the Observable execution is shared among the subscribers. You can do this using the Subject class, but RxJS also offers variants, namely: BehaviorSubject, ReplaySubject and AsyncSubject.

-----------------------------------------------------------

import * as Rx from "rxjs";

const observable = Rx.Observable.create((observer) => {
    observer.next(Math.random());
});

// subscription 1
observable.subscribe((data) => {
    console.log(data); // 0.24957144215097515 (random number)
});

// subscription 2
observable.subscribe((data) => {
    console.log(data); // 0.004617340049055896 (random number)
});

-----------------------------------------------------------

How to use Subjects to multicast:
Multicasting is a characteristic of a Subject. You don’t have to do anything special to achieve this behaviour.


import * as Rx from "rxjs";

const subject = new Rx.Subject();

// subscriber 1
subject.subscribe((data) => {
    console.log(data); // 0.24957144215097515 (random number)
});

// subscriber 2
subject.subscribe((data) => {
    console.log(data); // 0.24957144215097515 (random number)
});

subject.next(Math.random());

-----------------------------------------------------------

+Observables (Oct. 27, 2019, 1:59 p.m.)

Angular uses observables extensively in the event system and the HTTP service.

Observables provide support for passing messages between publishers (creators of Observables) and subscribers (users of Observables) in your application.

Observables are declarative, that is, you define the function for publishing values, but it is not executed until the consumer subscribes to it.

-------------------------------------------------------

Define Angular Observers:

The handler for receiving the observable notifications implements the Observer interface. It is an object that defines the callback methods to handle the three types of notifications that an observable can send. These are the following.

- next: Required. The handler for each delivered value called zero or more times after execution starts.
- error: Optional. The handler for error notification. The error halts the execution of the observable instance.
- complete: Optional. The handler for an execution-complete notification. The delayed values can continue to be delivered to a next handler after execution is complete.
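
A minimal sketch of an observer object with all three handlers, in the style of the RxJS examples in these notes:

import * as Rx from "rxjs";

const observable = Rx.Observable.create((observer) => {
    observer.next(1);
    observer.next(2);
    observer.complete();
});

observable.subscribe({
    next: (value) => console.log('next:', value),
    error: (err) => console.error('error:', err),
    complete: () => console.log('complete'),
});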

-------------------------------------------------------

+ECMAScript(ES) (Oct. 27, 2019, 1:56 p.m.)

ECMAScript is a simple standard for JavaScript and for adding new features to JavaScript.

ECMAScript is a subset of JavaScript.

JavaScript is basically ECMAScript at its core but builds upon it.

Languages such as ActionScript, JavaScript, and JScript all use ECMAScript as their core.

As a comparison, AS/JS/JScript are 3 different cars, but they all use the same engine… each of their exteriors is different though, and there have been several modifications done to each to make it unique.

+Sort array of objects (Oct. 6, 2019, 8:26 a.m.)

this.menus.sort((obj1, obj2) => {
    return obj1.ordering - obj2.ordering;
});

+Forms (Oct. 2, 2019, 10:52 p.m.)

Angular provides two different approaches for managing forms:
1- Reactive approach (or model-driven forms)
2- Template-driven approach

------------------------------------------------------------------------

Both reactive and template-driven forms share underlying common building blocks which are the following.

1- FormControl: It tracks the value and validation status of the individual form control.
2- FormGroup: It tracks the same values and status for the collection of form controls.
3- FormArray: It tracks the same values and status for the array of the form controls.
4- ControlValueAccessor: It creates the bridge between Angular FormControl instances and native DOM elements.

------------------------------------------------------------------------

Reactive forms:
Reactive forms or Model-driven forms are more robust, scalable, reusable, and testable. If forms are the key part of your application, or you’re already using reactive patterns for building your web application, use reactive forms.

In Reactive Forms, most of the work is done in the component class.

------------------------------------------------------------------------

Template-driven forms:
Template-driven forms are useful for adding the simple form to an app, such as the email list signup form. They’re easy to add to a web app, but they don’t scale as well as the reactive forms.

If you have the fundamental form requirements and logic that can be managed solely in the template, use template-driven forms.

In template-driven forms, most of the work is done in the template.

------------------------------------------------------------------------

FormControl:
It tracks the value and validity status of an angular form control. It matches to an HTML form control like an input.

this.username = new FormControl('agustin', Validators.required);

------------------------------------------------------------------------

FormGroup:
It tracks the value and validity state of a FormBuilder instance group. It aggregates the values of each child FormControl into one object, using the name of each form control as the key.
It calculates its status by reducing the statuses of its children. If one of the controls inside a group is invalid, the entire group becomes invalid.

this.user_data = new FormGroup({
    username: new FormControl('agustin', Validators.required),
    city: new FormControl('Montevideo', Validators.required)
});

------------------------------------------------------------------------

FormArray:
It is a variation of FormGroup. The main difference is that its data gets serialized as an array, as opposed to being serialized as an object in case of FormGroup. This might be especially useful when you don’t know how many controls will be present within the group, like in dynamic forms.

this.user_data = new FormArray([
    new FormControl('agustin', Validators.required),
    new FormControl('Montevideo', Validators.required)
]);

------------------------------------------------------------------------

FormBuilder:
It is a helper class that creates FormGroup, FormControl and FormArray instances for us. It basically reduces the repetition and clutter by handling details of form control creation for you.

this.validations_form = this.formBuilder.group({
    username: new FormControl('', Validators.required),
    email: new FormControl('', Validators.compose([
        Validators.required,
        Validators.pattern('^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+.[a-zA-Z0-9-.]+$')
    ]))
});

------------------------------------------------------------------------

+Material Design (Aug. 31, 2019, 9:54 a.m.)

ng add @angular/material

-------------------------------------------------------------------------

https://material.angular.io/guide/getting-started

Colors:
https://material.io/archive/guidelines/style/color.html#color-color-system


Using a pre-built theme:
https://material.angular.io/guide/theming


Material Design Icons:
https://google.github.io/material-design-icons/

+Libraries / Packages (Aug. 31, 2019, 3:42 a.m.)

Bootstrap:
npm install bootstrap jquery popper.js


Material Design:
npm install --save @angular/material @angular/cdk @angular/animations @angular/flex-layout material-design-icons hammerjs


Misc:
npm install rxjs-compat --save
npm install ng2-slim-loading-bar @angular/core --save

+Angular Releases (Aug. 31, 2019, 2:15 a.m.)

https://angular.io/guide/releases#support-policy-and-schedule

+CLI commands (June 28, 2019, 7:39 p.m.)

Display list of available commands:
ng


ng new project_name


ng --version


npm install bootstrap jquery popper.js --save


ng serve -o
ng serve --watch


ng g c product-add --skipTests=true


ng build --prod

+Install / Update Angular CLI (June 28, 2019, 7:33 p.m.)

Angular CLI helps us to create projects, generate application and library code, and perform a variety of ongoing development tasks such as testing, bundling, and deployment.

First, install Nodejs using my Nodejs notes, then:
sudo npm install -g @angular/cli

Ansible
+Common Options (May 16, 2018, 3:06 p.m.)

--ask-su-pass

Ask for su password (deprecated, use become)

------------------------------------------------------------

--ask-sudo-pass

Ask for sudo password (deprecated, use become)

------------------------------------------------------------

--become-user

Run operations as this user (default=root)

------------------------------------------------------------

--list-hosts

Outputs a list of matching hosts; does not execute anything else

------------------------------------------------------------

--list-tasks

List all tasks that would be executed

------------------------------------------------------------

--private-key, --key-file

Use this file to authenticate the connection

------------------------------------------------------------

--start-at-task <START_AT_TASK>

Start the playbook at the task matching this name

------------------------------------------------------------

--step

One-step-at-a-time: confirm each task before running

------------------------------------------------------------

--syntax-check

Perform a syntax check on the playbook, but do not execute it

------------------------------------------------------------

-C, --check

Don’t make any changes; instead, try to predict some of the changes that may occur

------------------------------------------------------------

-D, --diff

When changing (small) files and templates, show the differences in those files; works great with --check

------------------------------------------------------------

-K, --ask-become-pass

Ask for privilege escalation password

------------------------------------------------------------

-S, --su

Run operations with su (deprecated, use become)

------------------------------------------------------------

-b, --become

Run operations with become (does not imply password prompting)

------------------------------------------------------------

-e, --extra-vars

Set additional variables as key=value or YAML/JSON, if filename prepend with @

------------------------------------------------------------

-f <FORKS>, --forks <FORKS>

Specify number of parallel processes to use (default=5)

------------------------------------------------------------

-i, --inventory, --inventory-file

Specify inventory host path (default=[[u'/etc/ansible/hosts']]) or comma separated host list. --inventory-file is deprecated

------------------------------------------------------------

-k, --ask-pass

Ask for connection password

------------------------------------------------------------

-u <REMOTE_USER>, --user <REMOTE_USER>

Connect as this user (default=None)

------------------------------------------------------------

-v, --verbose

Verbose mode (-vvv for more, -vvvv to enable connection debugging)

------------------------------------------------------------

+Display output to console (May 16, 2018, 4:40 p.m.)

Every Ansible task, when run, can save its results into a variable. To do this, you have to specify which variable to save the results in, using the "register" parameter.

Once you save the value to a variable you can use it later in any of the subsequent tasks. So for example if you want to get the standard output of a specific task you can write the following:

ansible-playbook ansible/postgres.yml -e delete_old_backups=true

---
- hosts: localhost
  tasks:
    - name: Delete old database backups
      command: echo '{{ delete_old_backups }}'
      register: out

    - debug:
        var: out.stdout_lines

-----------------------------------------------------------------

You can also use -v when running ansible-playbook.

-----------------------------------------------------------------

+Pass conditional boolean value (May 16, 2018, 4:53 p.m.)

- name: Delete old database backups
  command: echo {{ delete_old_backups }}
  when: delete_old_backups|bool
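
The boolean can then be supplied on the command line when running the playbook, for example:

ansible-playbook playbook.yml -e delete_old_backups=true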

+Basic Commands (Jan. 7, 2017, 11:54 a.m.)

ansible test_servers -m ping

-----------------------------------------------------

ansible-playbook playbook.yml

ansible-playbook playbook.yml --check

-----------------------------------------------------

ansible-playbook site.yaml -i hostinv -e firstvar=false -e second_var=value2

ansible-playbook release.yml -e "version=1.23.45 other_variable=foo"

-----------------------------------------------------

+Inventory File (Jan. 7, 2017, 11:04 a.m.)

[postgres_servers]
mohsenhassani.com ansible_user=root
pythonist.ir ansible_user=mohsen
exam.myedu.ir:2020

--------------------------------------------------------

[webservers]
www[01:50].example.com

[databases]
db-[a:f].example.com

[targets]
localhost ansible_connection=local
other1.example.com ansible_connection=ssh ansible_user=mpdehaan
other2.example.com ansible_connection=ssh ansible_user=mdehaan

--------------------------------------------------------

Host Variables:

[atlanta]
host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909

--------------------------------------------------------

Group Variables:

[atlanta]
host1
host2

[atlanta:vars]
ntp_server=ntp.atlanta.example.com
proxy=proxy.atlanta.example.com

--------------------------------------------------------

Groups of Groups, and Group Variables:

It is also possible to make groups of groups using the :children suffix. Just like above, you can apply variables using :vars:

[atlanta]
host1
host2

[raleigh]
host2
host3

[southeast:children]
atlanta
raleigh

[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2

[usa:children]
southeast
northeast
southwest
northwest

--------------------------------------------------------

+Installation (Dec. 13, 2016, 4:33 p.m.)

sudo apt-get install libffi-dev libssl-dev python-pip python-setuptools
pip install ansible
pip install markupsafe

Apache
+Auth Types (Oct. 14, 2019, midnight)

# Backward compatibility with apache 2.2
Order allow,deny
Allow from all

# Forward compatibility with apache 2.4
Require all granted
Satisfy Any

-----------------------------------------------------------

<IfVersion < 2.4>
    Allow from all
</IfVersion>
<IfVersion >= 2.4>
    Require all granted
</IfVersion>

-----------------------------------------------------------

+Installation (Sept. 6, 2017, 11:11 a.m.)

For Debian earlier than Stretch:
apt-get install apache2 apache2.2-common apache2-mpm-prefork apache2-utils libexpat1 libapache2-mod-wsgi-py3 python-pip python-dev build-essential

For Debian Stretch:
apt-get install apache2 apache2-utils libexpat1 libapache2-mod-wsgi-py3 python-pip python-dev build-essential

+Password Protect via .htaccess (Feb. 26, 2017, 6:14 p.m.)

1- Create a file named `.htaccess` in the root of the website, with this content:

AuthName "Deskbit's Support"
AuthUserFile /etc/apache2/.htpasswd
AuthType Basic
require valid-user
-----------------------------------------------------
2- htpasswd -c /etc/apache2/.htpasswd mohsen
-----------------------------------------------------
3- Add this to <Directory> block:

<Directory /var/www/support/>
    Options Indexes FollowSymLinks
    AllowOverride ALL
    Require all granted
</Directory>
-----------------------------------------------------
4- Restart apache
/etc/init.d/apache2 restart
-----------------------------------------------------

+Configs for two different ports on same IP (Sept. 26, 2016, 10:07 p.m.)

NameVirtualHost *:80
<VirtualHost *:80>
    ServerAdmin mohsen@mohsenhassani.com
    #ServerName ecc.mohsenhassani.com
    ServerName 93.118.96.41
    ServerAlias ecc.mohsenhassani.com
    LogLevel warn
    ErrorLog /home/mohsen/logs/eccgroup_error.log
    WSGIScriptAlias / /home/mohsen/websites/ecc/ecc/wsgi.py
    WSGIDaemonProcess ecc python-path=/home/mohsen/websites/ecc:/home/mohsen/virtualenvs/django-1.10/lib/python3.4/site-packages
    WSGIProcessGroup ecc

    Alias /static /home/mohsen/websites/ecc/ecc/static
    <Directory /home/mohsen/websites/ecc/ecc/static>
        Require all granted
    </Directory>

    <Directory />
        Require all granted
    </Directory>
</VirtualHost>

------------------------------------------------------------------
Listen 8081
NameVirtualHost *:8081
<VirtualHost *:8081>
    ServerName 93.118.96.41
    ServerAdmin mohsen@mohsenhassani.com

    ErrorLog /var/log/apache2/freepbx.error.log
    CustomLog /var/log/apache2/freepbx.access.log combined
    DocumentRoot /var/www/html

    <Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

+Error Check (March 4, 2015, 12:06 p.m.)

sudo systemctl status apache2.service -l

# tail -f /var/log/apache2/error.log

+VirtualHost For Django Sites (March 4, 2015, 10:34 a.m.)

For Centos:
1- yum install mod_wsgi httpd httpd-devel

-----------------------------------------------------------------

For Debian:

2- Create a virtual host:
sudo nano /etc/apache2/sites-available/mydomain.com.conf
OR
sudo nano /etc/httpd/conf.d/mydomain.com.conf

-----------------------------------------------------------------

3- Create your new virtual host node which should look something like this:

<VirtualHost *:80>
    ServerName 192.168.92.241
    DocumentRoot /srv/mpei
    WSGIScriptAlias / /srv/mpei/mpei/wsgi.py

    LogLevel info
    ErrorLog /var/log/mpei_error.log

    WSGIDaemonProcess mpei processes=2 threads=15 python-path=/var/www/.virtualenvs/django-1.7/lib/python3.4/site-packages
    # WSGISocketPrefix /var/run/wsgi

    Alias /media/ /srv/mpei/mpei/media/
    Alias /static/ /srv/mpei/mpei/static/

    <Directory /srv/mpei/mpei/static>
        # For Apache 2.2
        Allow from all

        # For Apache 2.4
        Require all granted
    </Directory>

    <Directory /srv/mpei/mpei/media>
        # For Apache 2.2
        Allow from all

        # For Apache 2.4
        Require all granted
    </Directory>

    <Directory /srv/mpei/mpei>
        <Files wsgi.py>
            # For Apache 2.2
            Order deny,allow
            Allow from all

            # For Apache 2.4
            Require all granted
        </Files>
    </Directory>
</VirtualHost>

-----------------------------------------------------------------

4- Edit the wsgi.py file within the main app of your project:
import os
import sys

# Add the app's directory to the PYTHONPATH
sys.path.append('/srv/mpei/')
sys.path.append('/srv/mpei/mpei/')

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mpei.settings")

# Activate your virtualenv
activate_env=os.path.expanduser("/var/www/.virtualenvs/django-1.7/bin/activate_this.py")
exec(open(activate_env).read())

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

-----------------------------------------------------------------

5- Enable the virtual host (For Debian):
a2ensite site.mysite.com.conf

-----------------------------------------------------------------

6- If you want to disable a site, you can run a2dissite site.mysite.com.conf

-----------------------------------------------------------------

Compiling mod_wsgi

If you're using another version of Python, you'll need to compile mod_wsgi from source to match your virtualenv.

1- Download the latest version from the following website:
https://pypi.org/project/mod-wsgi/#files

2- Untar it, CD to the folder, and:
sudo ./configure --with-python=/usr/local/bin/python3.6
sudo LD_RUN_PATH=/usr/local/lib make
sudo make install

It will replace the one you had probably installed via the Linux package manager, and this solves any probable import errors.

-----------------------------------------------------------------

Serving the admin files:

cd /srv/mpei/mpei/static/
ln -s /var/www/.virtualenvs/django-1.7/lib/python3.4/site-packages/django/contrib/admin/static/admin .

-----------------------------------------------------------------

For debugging, use the ErrorLog directive in the above Apache config:
tail -f /var/log/mpei_error.log

-----------------------------------------------------------------

Listen 8000
WSGISocketPrefix /run/wsgi
<VirtualHost *:8000>
    ServerName 192.168.88.50
    DocumentRoot /srv/mpei
    WSGIScriptAlias / /srv/mpei/mpei/wsgi.py

    LogLevel info
    ErrorLog /var/log/mpei_error.log

    WSGIDaemonProcess mpei processes=2 threads=15 python-path=/srv/.virtualenvs/django-1.7/lib/python3.4/site-packages
    WSGIProcessGroup mpei

    Alias /media/ /srv/mpei/mpei/media/
    Alias /static/ /srv/mpei/mpei/static/

    <Directory /srv/mpei/mpei/static>
        Allow from all
    </Directory>

    <Directory /srv/mpei/mpei/media>
        Allow from all
    </Directory>

    <Directory /srv/mpei/mpei>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>

    Alias /recordings /var/spool/asterisk/
    <Directory /var/spool/asterisk/>
        Require all granted
        Options Indexes FollowSymlinks
    </Directory>
</VirtualHost>

-----------------------------------------------------------------

Asterisk
+Apache config files (Jan. 5, 2015, 4:51 p.m.)

Contents of file: /etc/apache2/sites-enabled/000-default.conf

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    ScriptAlias /cgi-bin/ /var/cgi-bin/
    <Directory "/var/cgi-bin">
        AllowOverride All
        Options None
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
-----------------------------------------------------------------------------
Create a file named .htaccess in /var/cgi-bin with this content:
AuthType Basic
AuthName "Restricted Access"
AuthUserFile /var/cgi-bin/.htpasswd
Require user mohsen
-----------------------------------------------------------------------------
htpasswd -c /var/cgi-bin/.htpasswd mohsen
And enter a desired password to create the password file.
-----------------------------------------------------------------------------

+Creating /etc/init.d/asterisk (Jan. 5, 2015, 2:08 p.m.)

1-cp asterisk-13.1.0/contrib/init.d/rc.debian.asterisk /etc/init.d/asterisk

2-Change the lines to these values:
DAEMON=/usr/sbin/asterisk
ASTVARRUNDIR=/var/run/asterisk
ASTETCDIR=/etc/asterisk


If you run it right now, you will get the error:
Restarting asterisk (via systemctl): asterisk.serviceFailed to restart asterisk.service: Unit asterisk.service failed to load: No such file or directory.
failed!

I restarted the server (reboot) and after booting up it was run successfully (/etc/init.d/asterisk start)

+Perl Packages/Libraries for Debian (Jan. 2, 2015, 12:29 p.m.)

Before starting the installation, be aware that you will need to install some packages from Synaptic, and they might require installing another version of `asterisk` and `asterisk-core`, along with lots of other libraries, all of which might break the one you just installed! So make sure the packages you need are installed from source, and do not say YES to apt-get without checking the libraries!
--------------------------------------------------------------------------------
1-apt-get install libghc-ami-dev

2-Install this file `dpkg --install libasterisk-ami-perl_0.2.8-1_all.deb`
If you don't have it, refer to the following link for creating this .deb file
http://www.debian-administration.org/article/78/Building_Debian_packages_of_Perl_modules

3-Copy the codecs binary `codec_g729-ast130-gcc4-glibc2.2-x86_64-core2.so` to the path `/usr/lib/asterisk/modules`
Rename it to `codec_g729.so` and, based on the other modules in this directory, set the chmod and chown of the file.
You can find it from this link: http://asterisk.hosting.lv/

+Running Asterisk as a Service (Dec. 15, 2014, 2:44 p.m.)

The most common way to run Asterisk in a production environment is as a service. Asterisk includes both a make target for installing Asterisk as a service and a script, safe_asterisk, that will manage the service and automatically restart Asterisk in case of errors.

Asterisk can be installed as a service using the make config target:
# make config
/etc/rc0.d/K91asterisk -> ../init.d/asterisk
/etc/rc1.d/K91asterisk -> ../init.d/asterisk
/etc/rc6.d/K91asterisk -> ../init.d/asterisk
/etc/rc2.d/S50asterisk -> ../init.d/asterisk
/etc/rc3.d/S50asterisk -> ../init.d/asterisk
/etc/rc4.d/S50asterisk -> ../init.d/asterisk
/etc/rc5.d/S50asterisk -> ../init.d/asterisk
Asterisk can now be started as a service:
# service asterisk start
* Starting Asterisk PBX: asterisk [ OK ]
And stopped:
# service asterisk stop
* Stopping Asterisk PBX: asterisk [ OK ]
And restarted:
# service asterisk restart
* Stopping Asterisk PBX: asterisk [ OK ]
* Starting Asterisk PBX: asterisk [ OK ]

+Executing as another User (Dec. 15, 2014, 2:42 p.m.)

Do not run as root
Running Asterisk as root or as a user with super user permissions is dangerous and not recommended. There are many ways Asterisk can affect the system on which it operates, and running as root can increase the cost of small configuration mistakes.

Asterisk can be run as another user using the -U option:
# asterisk -U asteriskuser

Often, this option is specified in conjunction with the -G option, which specifies the group to run under:
# asterisk -U asteriskuser -G asteriskuser

When running Asterisk as another user, make sure that user owns the various directories that Asterisk will access:
# sudo chown -R asteriskuser:asteriskuser /usr/lib/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/lib/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/spool/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/log/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/run/asterisk
# sudo chown asteriskuser:asteriskuser /usr/sbin/asterisk

+Commands (Dec. 15, 2014, 12:59 p.m.)

You can get a CLI (Command Line Interface) console to an already-running daemon by typing
asterisk -r
Another description for option '-r':
In order to connect to a running Asterisk process, you can attach a remote console using the -r option
------------------------------
To disconnect from a connected remote console, simply hit Ctrl+C.
------------------------------
To shut down Asterisk, issue:
core stop gracefully
------------------------------
There are three common commands related to stopping the Asterisk service. They are:
core stop now - This command stops the Asterisk service immediately, ending any calls in progress.
core stop gracefully - This command prevents new calls from starting up in Asterisk, but allows calls in progress to continue. When all the calls have finished, Asterisk stops.
core stop when convenient - This command waits until Asterisk has no calls in progress, and then it stops the service. It does not prevent new calls from entering the system.

There are three related commands for restarting Asterisk as well.
core restart now - This command restarts the Asterisk service immediately, ending any calls in progress.
core restart gracefully - This command prevents new calls from starting up in Asterisk, but allows calls in progress to continue. When all the calls have finished, Asterisk restarts.
core restart when convenient - This command waits until Asterisk has no calls in progress, and then it restarts the service. It does not prevent new calls from entering the system.

There is also a command if you change your mind.
core abort shutdown - This command aborts a shutdown or restart which was previously initiated with the gracefully or when convenient options.
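
These commands can also be sent to a running Asterisk from the shell by combining the -r and -x options described below, for example:

asterisk -rx "core stop gracefully"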
------------------------------
sip show peers - returns a list of chan_sip loaded peers
voicemail show users - returns a list of app_voicemail loaded users
core set debug 5 - sets the core debug to level 5 verbosity.
------------------------------
core show version
------------------------------
asterisk -h : Help. Run '/sbin/asterisk -h' to get a list of the available command line parameters.
asterisk -C <configfile>: Starts Asterisk with a different configuration file than the default /etc/asterisk/asterisk.conf.
-f : Foreground. Starts Asterisk but does not fork as a background daemon.
-c : Enables console mode. Starts Asterisk in the foreground (implies -f), with a console command line interface (CLI) that can be used to issue commands and view the state of the system.
-r : Remote console. Starts a CLI console which connects to an instance of Asterisk already running on this machine as a background daemon.
-R : Remote console. Starts a CLI console which connects to an instance of Asterisk already running on this machine as a background daemon and attempts to reconnect if disconnected.
-t : Record soundfiles in /var/tmp and move them where they belong after they are done.
-T : Display the time in "Mmm dd hh:mm:ss" format for each line of output to the CLI.
-n : Disable console colorization (for use with -c or -r)
-i: Prompt for cryptographic initialization passcodes at startup.
-p : Run as pseudo-realtime thread. Run with a real-time priority. (Whatever that means.)
-q : Quiet mode (suppress output)
-v : Increase verbosity (multiple v's = more verbose)
-V : Display version number and exit.
-d : Enable extra debugging across all modules.
-g : Makes Asterisk dump core in the case of a segmentation violation.
-G <group> : Run as a group other than the caller.
-U <user> : Run as a user other than the caller
-x <cmd> : Execute command <cmd> (only valid with -r)
------------------------------

+Installation (Dec. 14, 2014, 9:36 p.m.)

Before starting the installation, be aware that you need to install some packages from Synaptic, and they might require installing another version of `asterisk` and `asterisk-core`, along with lots of other libraries, all of which might break the one you just installed! So make sure the packages you need are installed from source, and do not say YES to apt-get without checking the libraries!
--------------------------------------------------------------------------------
Install these libraries first:
1-apt-get install libapache2-mod-auth-pgsql libanyevent-perl odbc-postgresql unixODBC unixODBC-dev libltdl-dev

2-Download the file asterisk-13-current.tar.gz from this link: http://downloads.asterisk.org/pub/telephony/asterisk/
a) Untar it.
You will need this untarred asterisk file in the following steps.

----------- Building and Installing pjproject -----------
1-Using the link http://www.pjsip.org/release/2.3/ download pjproject-2.3.tar.bz2

a) Untar and CD to the pjproject

b) ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'

c) make dep

d) make

e) make install

f) ldconfig

Now, to check whether you have successfully installed pjproject and whether Asterisk detects the libraries, untar and CD to the asterisk directory (I know you have not installed it yet, just move to the folder now :D), and enter the following command:

g) apt-get install libjansson-dev uuid-dev snmpd libperl-dev libncurses5-dev libxml2-dev libsqlite3-dev

*** important ***
Before continuing to the next step, you have to know that, based on the needs of the Shetab company, you need to enable the `res_snmp` module. To enable it you need to install `net-snmp_5.4.3`, and since it's not in Synaptic, you have to install it from source:
1- Download it from: https://launchpad.net/debian/+source/net-snmp/5.4.3~dfsg-2.8+deb7u1
2- Install it using ./configure, make and make install
*** End of important ***

h) ./configure --without-pwlib (If you don't use this --without switch, you will get the following error, even if you have already installed the ptlib package!)
Cannot find ptlib-config - please install and try again

i) make menuselect

j) Browse to the eleventh category, `Resource Modules`, and make sure the `res_snmp` module at the bottom of the list is checked. Exit the menu using the Escape key and continue with installing Asterisk.

----------- Building and Installing Asterisk -----------
2- Make sure you are still in the asterisk directory.

c) make
I got so many errors surrounded by '**************' (so many asterisks) telling me these modules were needed:
res_curl, res_odbc, res_crypto, res_config_curl ... (and many more). I just installed postgresql and the `make` command continued working with no errors!

d) make install

e) make samples

f) make progdocs

Now continue installation process with Perl packages from my tutorials.
After that, refer to `Creating /etc/init.d/asterisk` in my tutorials.

Beautiful Soup
+Remove tags from an element (March 7, 2020, 5:44 p.m.)

comments = soup.findAll('div', {'class': 'cmnt-text'})
for comment in comments:
    print(comment.get_text())

+Methods (March 7, 2020, 5:24 p.m.)

comment = soup.find('div', {'class': 'comment-user'})

print(type(comment))
<class 'bs4.element.Tag'>

--------------------------------------------------------------------------

comment = soup.findAll('div', {'class': 'comment-user'})

print(type(comment))
<class 'bs4.element.ResultSet'>

--------------------------------------------------------------------------

question = soup.find('p', {'itemprop': 'text'}).text

--------------------------------------------------------------------------

image_url = image_tag.find('img').get('src')

--------------------------------------------------------------------------

soup.find('app-comment-list')

--------------------------------------------------------------------------

comment_boxes = comments_placeholder.findAll('app-comment')

--------------------------------------------------------------------------

comment = comment_box.find('p', {'class': 'text', 'itemprop': 'text'})

--------------------------------------------------------------------------

+Usages (March 7, 2020, 5:10 p.m.)

From local file:

from bs4 import BeautifulSoup

soup = BeautifulSoup(open('source.html'), 'html.parser')
comments = soup.find('app-comment-list')
print(comments)

-----------------------------------------------------------------------------------

From URL:

response = requests.get(url='URL')
soup = BeautifulSoup(response.text, 'html.parser')
comments = soup.find('app-comment-list')
print(comments)

-----------------------------------------------------------------------------------

From URL, passing data as POST:


data = {'from_post': 1, 'to_post': 100}
response = requests.post(url='URL', json=data)
soup = BeautifulSoup(response.text, 'html.parser')
comments = soup.find('app-comment-list')
print(comments)


-----------------------------------------------------------------------------------

From URL using requests and proxy:

params = {
    'timeout': 20,
    'verify': False,
    'proxies': {'https': 'https://192.168.1.17:8080'},
    'url': URL,
    'json': {}
}

response = requests.get(**params)
soup = BeautifulSoup(response.text, 'html.parser')
comments = soup.find('app-comment-list')
print(comments)

-----------------------------------------------------------------------------------

+Installation (March 7, 2020, 5:09 p.m.)

pip install beautifulsoup4

or

apt-get install python3-bs4

BIND
+PTR Record (Aug. 19, 2018, 7:59 p.m.)

A Pointer (PTR) record resolves an IP address to a fully-qualified domain name (FQDN) as an opposite to what A record does. PTR records are also called Reverse DNS records.

PTR records are mainly used to check if the server name is actually associated with the IP address from where the connection was initiated.

IP addresses of all Intermedia mail servers already have PTR records created.

--------------------------------------------------------------

What is PTR Record?

PTR records are used for the Reverse DNS (Domain Name System) lookup. Using the IP address you can get the associated domain/hostname. An A record should exist for every PTR record. The usage of a reverse DNS setup for a mail server is a good solution.

While in the domain DNS zone the hostname is pointed to an IP address, using the reverse zone allows pointing an IP address to a hostname.
In the Reverse DNS zone, you need to use a PTR Record. The PTR Record resolves the IP address to a domain/hostname.
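
For example (a hypothetical record reusing an IP from these notes), the reverse zone 200.69.192.in-addr.arpa could contain:

153 IN PTR mail.mohsenhassani.ir.

and the reverse lookup can then be checked with:

dig -x 192.69.200.153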

--------------------------------------------------------------

+Errors (Aug. 7, 2015, 3:31 p.m.)

managed-keys-zone ./IN: loading from master file managed-keys.bind

For solving it:
nano /etc/bind/named.conf
add include "/etc/bind/bind.keys";

And also create an empty file:
touch /etc/bind/managed-keys.bind
**********************************************************
When working with the reverse DNS zone (rev.10.168.192.in-addr.arpa) and the zone file (mohsenhassani.ir.db), you can use the named-checkzone tool to check the validity of the files:
named-checkzone mohsenhassani.ir rev.10.168.192.in-addr.arpa
named-checkzone mohsenhassani.ir mohsenhassani.ir.db

+Configuration (Aug. 21, 2014, 12:48 p.m.)

This file contains a summary of my own experiences:

1- There are some default zones in "/etc/bind/named.conf.default-zones"; there is no need to change them, nor to exclude that file from "/etc/bind/named.conf".
---------------------------------------------------------------------------------------------
2- Add a line at the bottom of the file "/etc/bind/named.conf":
include "/etc/bind/named.conf.external-zones";
--------------------------------------------------------------------------------------------
3-Create a file named "/etc/bind/named.conf.external-zones" and fill it up with:
// -------------- Begin mohsenhassani.ir --------------
zone "mohsenhassani.ir" {
type master;
file "/etc/bind/zones/mohsenhassani.ir.db";
};

zone "1.10.168.192.in-addr.arpa" {
type master;
file "/etc/bind/zones/1.10.168.192.in-addr.arpa";
};
// -------------- End mohsenhassani.ir --------------


// -------------- Begin shahbal.ir --------------
zone "shahbal.ir" {
type master;
file "/etc/bind/zones/shahbal.ir.db";
};

zone "2.10.168.192.in-addr.arpa" {
type master;
file "/etc/bind/zones/2.10.168.192.in-addr.arpa";
};
// -------------- End shahbal.ir --------------
--------------------------------------------------------------------------------------------
4- There is an empty directory at "/etc/bind/zones/". This is the place for holding the data for the above paths. So create a file named "mohsenhassani.ir.db" and fill it up with:
$TTL 3h
@ IN SOA ns.mohsenhassani.ir. a.b.com. (
        2013020828
        20m
        15m
        1w
        1h
)

@    IN NS ns.mohsenhassani.ir.

ns   IN A 199.26.84.20
@    IN A 199.26.84.20
---------------------------------------------------------------------------------
5- Repeat the earlier step with a different file name and data: create a file named "1.10.168.192.in-addr.arpa" in "/etc/bind/zones/" and fill it with:

$TTL 3h
@ IN SOA mohsenhassani.ir. mail.mohsenhassani.ir. (
        3
        15m
        15m
        1w
        1h )

; main domain name servers
@     IN NS mohsenhassani.ir.
@     IN NS www.mohsenhassani.ir.
@     IN NS sites.mohsenhassani.ir.
; main domain mail servers
@     IN MX 10 mail.mohsenhassani.ir.
; A records for name servers above
@     IN A 192.69.200.153
www   IN A 192.69.200.153
pania IN A 192.69.200.153
; A record for mail server above
mail  IN A 192.69.200.153
---------------------------------------------------------------------------------------------
6- Done!
When I finished this configuration, I tested my work with "dig mohsenhassani.ir", but I got an error like:

root@mohsenhassani:/home/mohsen# dig mohsenhassani.ir
; <<>> DiG 9.7.3 <<>> mohsenhassani.ir
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8929
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;mohsenhassani.ir. IN A

;; Query time: 383 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sat Mar 16 17:00:19 2013
;; MSG SIZE rcvd: 34


In the line ";; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8929", the word "SERVFAIL" shows that there are errors. Many different problems can cause this error, and searching for the error along with its id may help you solve it.
Anyway, for this error I had to do the following:
sudo nano /etc/resolv.conf
and add this as the first line:
nameserver 127.0.0.1
The file already contained the 8.8.8.8 and 4.4.4.4 nameservers.

After that, running "dig mohsenhassani.ir" produced no more errors:
root@mohsenhassani:/home/mohsen# dig mohsenhassani.ir

; <<>> DiG 9.7.3 <<>> mohsenhassani.ir
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39792
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;mohsenhassani.ir. IN A

;; ANSWER SECTION:
mohsenhassani.ir. 10800 IN A 192.69.200.153

;; AUTHORITY SECTION:
mohsenhassani.ir. 10800 IN NS ns.mohsenhassani.ir.

;; ADDITIONAL SECTION:
ns.mohsenhassani.ir. 10800 IN A 192.69.200.153

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sat Mar 16 17:02:26 2013
;; MSG SIZE rcvd: 83
-----------------------------------------------------------------------------
Oh! And you have to create two sub-domains named "ns1.mohsenhassani.COM" and "ns2.mohsenhassani.COM" so that you can forward the ".ir" domains to these sub-domains.

+Installation (Aug. 7, 2015, 4:22 p.m.)

http://jack-brennan.com/caching-dns-with-bind9-on-debian/
---------------------------------------------------------------------------------------------
apt-get install bind9 bind9utils
---------------------------------------------------------------------------------------------
Configuration:

When installing and configuring or restarting bind, in case of encountering errors, check the log files. The log files are not stored separately. BIND stores the logs in the syslog:
nano /var/log/syslog
***************************************************
1-nano /etc/bind/named.conf.options
We need to modify the forwarders. These are the DNS servers to which your own DNS server will forward the requests it cannot process.

forwarders {
# Replace the address below with the address of your provider's DNS server
88.135.34.227;
};
*******************************************
2-Add this line to the file: /etc/bind/named.conf
include "/etc/bind/named.conf.external-zones";
******************************************************
3-nano /etc/bind/named.conf.external-zones
This is where we will insert our zones. By the way, a zone is a domain name that is referenced in the DNS server.

// -------------- Begin mohsenhassani.ir --------------
zone "mohsenhassani.ir" {
type master;
file "/etc/bind/zones/mohsenhassani.ir.db";
};

zone "1.10.168.192.in-addr.arpa" {
type master;
file "/etc/bind/zones/1.10.168.192.in-addr.arpa";
};
// -------------- End mohsenhassani.ir --------------
**********************************************
4-nano /etc/bind/zones/1.10.168.192.in-addr.arpa
$TTL 3h
@ IN SOA mohsenhassani.ir. mail.mohsenhassani.ir. (
        3
        15m
        15m
        1w
        1h )


@ IN NS mohsenhassani.ir.
@ IN A 192.69.204.35
**********************************************************
5-Restart BIND:
sudo /etc/init.d/bind9 restart

In case of failure, check the errors:
nano /var/log/syslog

We can now test the new DNS server...
*******************************************************
Modify the file resolv.conf with the following settings:
sudo nano /etc/resolv.conf

enter the following:

search example.com
nameserver 4.4.4.4
nameserver 8.8.8.8
***********************************************************
Now, test your DNS:
dig example.com

In case of errors, refer to the Errors note in the BIND category.

+Description (Aug. 21, 2014, 12:45 p.m.)

Every system on the Internet must have a unique IP address. (This does not include systems that are behind a NAT firewall because they are not directly on the Internet.) DNS acts as a directory service for all of these systems, allowing you to specify each one by its hostname. A telephone book allows you to look up an individual person by name and get their telephone number, their unique identifier on the telephone system's network. DNS allows you to look up an individual server by name and get its IP address, its unique identifier on the Internet.
There are other hostname-to-IP directory services in use, mainly for LANs. Windows LANs can use WINS. UNIX LANs can use NIS. But because DNS is the directory service for the Internet (and can also be used for LANs) it is the most widely used. UNIX LANs could always use DNS instead of NIS, and starting with Windows 2000 Server, Windows LANs could use DNS instead of, or in addition to, WINS. And on small LANs where there are only a few machines you could just use HOSTS files on each system instead of setting up a server running DNS, NIS, or WINS.

As a service, DNS is critical to the operation of the Internet. When you enter www.some-domain.com in a Web browser, it's DNS that takes the www host name and translates it to an IP address. Without DNS, you could be connected to the Internet just fine, but you ain't goin' no where. Not unless you keep a record of the IP addresses of all of the resources you access on the Internet and use those instead of host/domain names.

So when you visit a Web site, you are actually doing so using the site's IP address even though you specified a host and domain name in the URL. In the background your computer quickly queried a DNS server to get the IP address that corresponds to the Web site's server and domain names. Now you know why you have to specify one or two DNS server IP addresses in the TCP/IP configuration on your desktop PC (in the resolv.conf file on a Linux system and the TCP/IP properties in the Network Control Panel on Windows systems).
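
You can watch this name-to-address step happen from code; a minimal Python sketch (www.debian.org is just an example hostname):

import socket

# Ask the system resolver (which uses the DNS servers listed in
# /etc/resolv.conf on Linux) for the IP address behind a hostname.
print(socket.gethostbyname('www.debian.org'))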

A "cannot connect" error doesn't necessarily indicate there isn't a connection to the destination server. There may very well be. The error may indicate a failure in "resolving" the domain name to an IP address. I use the open-source Firefox Web browser on Windows systems because the status bar gives more informational messages like "Resolving host", "Connecting to", and "Transferring data" rather than just the generic "Opening page" with IE. (It also seems to render pages faster than IE.)

In short, always check for correct DNS operation when troubleshooting a problem involving the inability to access an Internet resource. The ability to resolve names is critical, and later in this page, we'll show you some tools you can use to investigate and verify this ability.
When you are surfing the Web, viewing Web pages, or sending e-mail, your workstation sends queries to a DNS server to resolve server/domain names. (Back on the Modems page we showed you how to set up your resolv.conf file to do this.) When you have your own Web site that other people visit, you need a DNS server to respond to the queries from their workstations.

When you visit Web sites, the DNS server your workstation queries for name resolution is typically run by your ISP, but you could have one of your own. When you have your own Web site the DNS servers which respond to visitors' queries are typically run by your Web hosting provider, but you could likewise have your own one of these too. Actually, if you set up your own DNS server it could be used to respond to both "internal" (from your workstation) and "external" (from your Web site's visitors) queries.

Even if you don't have your own domain name or even your own LAN, you can still benefit from using a DNS server to allow others to access your Debian system. If you have a single system connected to the Internet via a cable or DSL connection, you can have it act as a Web/e-mail/FTP server using a neat service called "dynamic DNS" which we'll cover later. Dynamic DNS will even work with a modem if you want to play around with it.

DNS Server Functions:
You can set up a DNS server for several different reasons:
Internet Domain Support: If you have a domain name and you're operating Web, e-mail, FTP, or other Internet servers, you'll use a DNS server to respond to resolution queries so others can find and access your server(s). This is a serious undertaking and you'd have to set up a minimum of two of them. On this page, we'll refer to these types of DNS servers as authoritative DNS servers for reasons you'll see later. However, there are alternatives to having your own authoritative DNS server if you have (or want to have) your own domain name. You can have someone else host your DNS records for you. Even if someone else is taking care of your domain's DNS records you could still set up one of the following types of DNS servers.

Local Name Resolution: Similar to the above scenario, this type of DNS server would resolve the hostnames of systems on your LAN. Typically in this scenario, there is one DNS server and it does both jobs. The first being that it receives queries from workstations and the second being that it serves as the authoritative source for the responses (this will be more clear as we progress). Having this type of DNS server would eliminate the need to have (and manually update) a HOSTS file on each system on your LAN. On this page, we'll refer to these as LAN DNS servers.

During the Debian installation, you are asked to supply a domain name. This is an internal (private) domain name that is not visible to the outside world so like the private IP address ranges you use on a LAN, it doesn't have to be registered with anyone. A LAN DNS server would be authoritative for this internal, private domain. For security reasons, the name for this internal domain should not be the same as any public domain name you have registered. Private domain names are not restricted to using one of the established public TLD (Top Level Domain) names such as .com or .net. You could use .corp or .inc or anything else for your TLD. Since a single DNS server can be authoritative for multiple domains, you could use the same DNS server for both your public and private domains. However, the server would need to be accessible from both the Internet and the LAN so you'd need to locate it in a DMZ. Though you want to use different public and private domain names, you can use the same name for the second-level domain. For example, my-domain.com for the public name and my-domain.inc for the private name.


Internet Name Resolution: LAN workstations and other desktop PCs need to send Internet domain name resolution queries to a DNS server. The DNS server most often used for this is the ISP's DNS servers. These are often the DNS servers you specify in your TCP/IP configuration. You can have your own DNS server respond to these resolution queries instead of using your ISP's DNS servers. My ISP recently had a problem where they would intermittently lose connectivity to the network segment that their DNS servers were connected to so they couldn't be contacted. It took me about 30 seconds to turn one of my Debian systems into this type of DNS server and I was surfing with no problems. On this page, we'll refer to these as simple DNS servers. If a simple DNS server fails, you could just switch back to using your ISP's DNS servers. As a matter of fact, given that you typically specify two DNS servers in the TCP/IP configuration of most desktop PCs, you could have one of your ISP's DNS servers listed as the second (fallback) entry and you'd never miss a beat if your simple DNS server did go down. Turning your Debian system into a simple DNS server is simply a matter of entering a single command.

Don't take from this that you need three different types of DNS servers. If you were to set up a couple of authoritative DNS servers they could also provide the functionality of LAN and simple DNS servers. And a LAN DNS server can simultaneously provide the functionality of a simple DNS server. It's a progressive type of thing.

If you were going to set up authoritative DNS servers or a simple DNS server you'd have to have a 24/7 broadband connection to the Internet. Naturally, a LAN DNS server that didn't resolve Internet host/domain names wouldn't need this.

A DNS server is just a Debian system running a DNS application. The most widely used DNS application is BIND (Berkeley Internet Name Domain) and it runs a daemon called named that, among other things, responds to resolution queries. We'll see how to install it after we cover some basics.

DNS Basics:
Finding a single server out of all of the servers on the Internet is like trying to find a single file on the drive with thousands of files. In both cases, it helps to have some hierarchy built into the directory to logically group things. The DNS "namespace" is hierarchical in the same type of upside-down tree structure seen with file systems. Just as you have the root of a partition or drive, the DNS namespace has a root which is signified by a period.

Namespace Root --> Top Level Domains --> Second Level Domains
Namespace Root: .
Top Level Domains: com, net, org
Second Level Domains: com --> aboutdebian, cnn; net --> sbc; org --> samba, debian

When specifying the absolute path to a file in a file system you start at the root and go to the file:
/etc/bind/named.conf

When specifying the absolute path to a server in the DNS namespace you start at the server and go to the root:
www.aboutdebian.com.

Note that period after the 'com' as it's important. It's how you specify the root of the namespace. An absolute path in the DNS namespace is called an FQDN (Fully Qualified Domain Name). The use of FQDNs is prevalent in DNS configuration files and it's important that you always use that trailing period.

Internet resources are usually specified by a domain name and a server hostname. The www part of a URL is often the hostname of the Web server (or it could be an alias to a server with a different hostname). DNS is basically just a database with records for these hostnames. The directory for the entire telephone system is not stored in one huge phone book. Rather, it is broken up into many pieces with each city having and maintaining, its piece of the entire directory in its phone book. By the same token, pieces of the DNS directory database (the "zones") are stored, and maintained, on many different DNS servers located around the Internet. If you want to find the telephone number for a person in Poughkeepsie, you'd have to look in the Poughkeepsie telephone book. If you want to find the IP address of the www server in the some-domain.com domain, you'd have to query the DNS server that stores the DNS records for that domain.

The entries in the database map a host/domain name to an IP address. Here is a simple logical view of the type of information that is stored (we'll get to the A, CNAME, and MX designations in a bit).

A www.their-domain.com 172.29.183.103
MX mail.their-domain.com 172.29.183.217
A debian.your-domain.com 10.177.8.3
CNAME www.your-domain.com 10.177.8.3
MX debian.your-domain.com 10.177.8.3

This is why a real Internet server needs a static (unchanging) IP address. The IP address of the server's NIC connected to the Internet has to match whatever address is in the DNS database. Dynamic DNS does provide a way around this for home servers however, which we'll see later.

When you want to browse to www.their-domain.com your DNS server (the one you specify in the TCP/IP configuration on your desktop computer) most likely won't have a DNS record for the their-domain.com domain so it has to contact the DNS server that does. When your DNS server contacts the DNS server that has the DNS records (referred to as "resource records" or "zone records") for their-domain.com your DNS server gets the IP address of the www server and relays that address back to your desktop computer. So which DNS server has the DNS records for a particular domain?

When you register a domain name with someone like Network Solutions, one of the things they ask you for are the server names and addresses of two or three "name servers" (DNS servers). These are the servers where the DNS records for your domain will be stored (and queried by the DNS servers of those browsing to your site). So where do you get the "name servers" information for your domain? Typically, when you host your Web site using a Web hosting service they not only provide a Web server for your domain's Web site files but they will also provide a DNS server to store your domain's DNS records. In other words, you'll want to know who your Web hosting provider is going to be before you register a domain name (so you can enter the provider's DNS server information in the name servers section of the domain name registration application).

You'll see the term "zone" used in DNS references. Most of the time a zone just equates to a domain. The only times this wouldn't be true is if you set up subdomains and set up separate DNS servers to handle just those subdomains. For example, a company would set up the subdomains us.their-domain.com and europe.their-domain.com and would "delegate" a separate DNS server to each one of them. In the case of these two DNS servers, their zone would be just the subdomains. The zone of the DNS server for the parent their-domain.com (which would contain the servers www.their-domain.com and mail.their-domain.com) would only contain records for those few machines in the parent domain.

Note that in the above example "us" and "europe" are subdomains while "www" and "mail" are hostnames of servers in the parent domain.

Once you've got your Web site up and running on your Web hosting provider's servers and someone surfs to your site, the DNS server they specified in their local TCP/IP configuration will query your hosting provider's DNS servers to get the IP address for your Web site. The DNS servers that host the DNS records for your domain, i.e. the DNS servers you specify in your domain name registration application, are the authoritative DNS servers for your domain. The surfer's DNS server queries one of your site's authoritative DNS servers to get an address and gets an authoritative response. When the surfer's DNS server relays the address information back to the surfer's local PC it is a "non-authoritative" response because the surfer's DNS server is not an authoritative DNS server for your domain.

Example: If you surf to MIT's Web site the DNS server you have specified in your TCP/IP configuration queries one of MIT's authoritative DNS servers and gets an authoritative response with the IP address for the 'www' server. Your DNS server then sends a non-authoritative response back to your PC. You can easily see this for yourself. At a shell prompt, or a DOS window on a newer Windows system, type in:

nslookup www.mit.edu

First, you'll see the name and IP address of your locally-specified DNS server. Then you'll see the non-authoritative response your DNS server sent back containing the name and IP address of the MIT Web server.

If you're on a Linux system you can also see which name server(s) your DNS server contacted to get the IP address. At a shell prompt type in:

whois mit.edu

and you'll see three authoritative name servers listed with the hostnames STRAWB, W20NS, and BITSY. The 'whois' command simply returns the contents of a site's domain record.


DNS Records and Domain Records

Don't confuse DNS zone records with domain records. Your domain record is created when you fill out a domain name registration application and is maintained by the domain registration service (like Network Solutions) you used to register the domain name. A domain only has one domain record and it contains administrative and technical contact information as well as entries for the authoritative DNS servers (aka "name servers") that are hosting the DNS records for the domain. You have to enter the hostnames and addresses for multiple DNS servers in your domain record for redundancy (fail-over) purposes.

DNS records (aka zone records) for a domain are stored in the domain's zone file on the authoritative DNS servers. Typically, it is stored on the DNS servers of whatever Web hosting service is hosting your domain's Web site. However, if you have your own Web server (rather than using a Web hosting service) the DNS records could be hosted by you using your own authoritative DNS servers (as in MIT's case), or by a third party like EasyDNS.

In short, the name servers you specified in your domain record host the domain's zone file containing the zone records. The name servers, whether they be your Web hosting provider's, those of a third party like EasyDNS, or your own, which host the domain's zone file are authoritative DNS servers for the domain.

Because DNS is so important to the operation of the Internet, when you register a domain name you must specify a minimum of two name servers. If you set up your own authoritative DNS servers for your domain you must set up a minimum of two of them (for redundancy) and these would be the servers you specify in your domain record. While the multiple servers you specify in your domain record are authoritative for your domain, only one DNS server can be the primary DNS server for a domain. Any others are "secondary" servers. The zone file on the primary DNS server is "replicated" (transferred) to all secondary servers. As a result, any changes made to DNS records must be made on the primary DNS server. The zone files on secondary servers are read-only. If you made changes to the records in a zone file on a secondary DNS server they would simply be overwritten at the next replication. As you will see below, the primary server for a domain and the replication frequency are specified in a special type of zone record.

Early on in this page, we said that the DNS zone records are stored in a DNS database which we now know is called a zone file. The term "database" is used quite loosely. The zone file is actually just a text file that you can edit with any text editor. A zone file is domain-specific. That is, each domain has its own zone file. Actually, there are two zone files for each domain but we're only concerned with one right now. The DNS servers for a Web hosting provider will have many zone files, two for each domain it's hosting zone records for. A zone "record" is, in most cases, nothing more than a single line in the text zone file.

There are different types of DNS zone records. These numerous record types give you flexibility in setting up the servers in your domain. The most common types of zone records are:

An A (Address) record is a "host record" and it is the most common type. It is simply a static mapping of a hostname to an IP address. A common hostname for a Web server is 'www' so the A record for this server gives the IP address for this server in the domain.

An MX (Mail eXchanger) record is specifically for mail servers. It's a special type of service-specifier record. It identifies a mail server for the domain. That's why you don't have to enter a hostname like 'www' in an e-mail address. If you're running Sendmail (mail server) and Apache (Web server) on the same system (i.e. the same system is acting as both your Web server and e-mail server), both the A record for the system and the MX record would refer to the same server.

To offer some fail-over protection for e-mail, MX records also have a Priority field (numeric). You can enter two or three MX records each pointing to a different mail server, but the server specified in the record with the highest priority (lowest number) will be chosen first. A mail server with a priority of 10 in the MX record will receive an e-mail before a server with a priority of 20 in its MX record. Note that we are only talking about receiving mail from other Internet mail servers here. When a mail server is sending mail, it acts as a desktop PC when it comes to DNS. The mail server looks at the domain name in the recipient's e-mail address and the mail server then contacts its local DNS server (specified in the resolv.conf file) to get the IP address for the mail server in the recipient's domain. When an authoritative DNS server for the recipient's domain receives the query from the sender's DNS server it sends back the IP addresses from the MX records it has in that domain's zone file.
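
To see MX priorities in practice, here is a minimal sketch using the third-party dnspython package (pip install dnspython); gmail.com is just an example domain:

import dns.resolver

# Lower preference number = higher priority; a sending mail server
# tries the exchange with the lowest preference first.
answers = dns.resolver.resolve('gmail.com', 'MX')
for rdata in sorted(answers, key=lambda r: r.preference):
    print(rdata.preference, rdata.exchange)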

A CNAME (Canonical Name) record is an alias record. It's a way to have the same physical server respond to two different hostnames. Let's say you're not only running Sendmail and Apache on your server, but you're also running WU-FTPD so it also acts as an FTP server. You could create a CNAME record with the alias name 'FTP' so people could use ftp.your-domain.com and www.your-domain.com to access different services on the same server.

Another use for a CNAME record was illustrated in the example near the top of the page. Suppose you name your Web server 'Debian' instead of 'www'. You could simply create a CNAME record with the alias name 'www' but with the hostname 'Debian' and Debian's IP address.

NS (Name Server) records specify the authoritative DNS servers for a domain.

There can be multiples of all of the above record types. There is one special record type of which there is only one record in the zone file. That's the SOA (Start Of Authority) record and it's the first record in the zone file. An SOA record is only present in a zone file located on authoritative DNS servers (non-authoritative DNS servers can cache zone records). It specifies such things as:

The primary authoritative DNS server for the zone (domain).
The e-mail address of the zone's (domain's) administrator. In zone files, the '@' has a specific meaning (see below) so the e-mail address is written as me.my-domain.com.

Timing information as to when secondary DNS servers should refresh or expire a zone file and a serial number to indicate the version of the zone file for the sake of comparison.

The SOA record is the one that takes up several lines.

Several important points to note about the records in a zone file:

Records can specify servers in other domains. This is most commonly used with MX and NS records when backup servers are located in a different domain but receive mail or resolve queries for your domain.

There must be an A record for systems specified in all MX, NS, and CNAME records.

A and CNAME records can specify workstations as well as servers (which you'll see when we set up a LAN DNS server).

Now let's look at a typical zone file. When a Debian system is set up as a DNS server the zone files are stored in the /etc/bind directory. In a zone file, the two parentheses around the timer values act as line-continuation characters as does the '\' character at the end of the second line. The ';' is the comment character. The 'IN' indicates an INternet-class record.

$TTL 86400
my-name.com. IN SOA debns1.my-name.com. \
             joe.my-name.com. (
        2004011522 ; Serial no., based on date
        21600 ; Refresh after 6 hours
        3600 ; Retry after 1 hour
        604800 ; Expire after 7 days
        3600 ; Minimum TTL of 1 hour
)
;Name servers
debns1 IN A 192.168.1.41
debns2.my-name.com. IN A 192.168.1.42

@ IN NS debns1
my-name.com. IN NS debns2.my-name.com.


;Mail servers
debmail1 IN A 192.168.1.51
debmail2.my-name.com. IN A 192.168.1.52

@ IN MX 10 debmail1
my-name.com. IN MX 20 debmail2.my-name.com.


;Aliased servers
debhp IN A 192.168.1.61
debdell.my-name.com. IN A 192.168.1.62

www IN CNAME debhp
ftp.my-name.com. IN CNAME debdell.my-name.com.


Source: http://www.aboutdebian.com/dns.htm

Celery
+Django Celery with django-celery-results extension (Nov. 11, 2016, 10:37 a.m.)

pip install celery
pip install django_celery_results
pip install django_celery_beat

------------------------------------------------

# project/project/celery.py

from __future__ import absolute_import, unicode_literals
import os

from celery import Celery


os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

------------------------------------------------

# project/project/__init__.py

from __future__ import absolute_import, unicode_literals

from .celery import app as celery_app


__all__ = ['celery_app']

------------------------------------------------

# project/project/tasks.py

from __future__ import absolute_import

from celery import shared_task


@shared_task
def begin_ping():
    return 'hi'

------------------------------------------------

# settings.py

INSTALLED_APPS = (
    'celery',
    'django_celery_results',
    'django_celery_beat',
)

CELERY_RESULT_BACKEND = 'django-db'

------------------------------------------------

python manage.py migrate django_celery_results
python manage.py migrate django_celery_beat

------------------------------------------------

apt install rabbitmq-server
For running it:
rabbitmq-server

------------------------------------------------

Run these two commands in separate terminals, each with the virtualenv activated:
celery -A project beat -l info -S django
celery -A project worker -l info

The "celery -A project beat -l info -S django" is for "DatabaseScheduler" which gets the schedules from Django admin panel.
You can use "celery -A project beat -l info" which is for "PersistentScheduler" which gets the schedules from scripts in the tasks.

To define schedules from the admin panel, open "Intervals" and define a suitable interval.
Then open "Periodic tasks" and select the defined interval in the "Interval" dropdown list.

------------------------------------------------

+Celery and RabbitMQ with Django (Oct. 14, 2018, 9:54 a.m.)

1- pip install Celery

--------------------------------------------------------------

2- apt-get install rabbitmq-server

--------------------------------------------------------------

3- Enable and start the RabbitMQ service
systemctl enable rabbitmq-server
systemctl start rabbitmq-server

--------------------------------------------------------------

4- Add configuration to the settings.py file:
CELERY_BROKER_URL = 'amqp://localhost'

--------------------------------------------------------------

5- Create a new file named celery.py in your app:
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

app = Celery('mysite')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

--------------------------------------------------------------

6- Edit the __init__.py file in the project root:

from .celery import app as celery_app

__all__ = ['celery_app']

--------------------------------------------------------------

7- Create a file named tasks.py inside a Django app:

from celery import shared_task

@shared_task
def my_task(x, y):
    return x, y

--------------------------------------------------------------

8- In views.py

from .tasks import my_task

my_task.delay(x, y)

Instead of calling the "my_task" directly, we call my_task.delay(). This way we are instructing Celery to execute this function in the background.
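
The call returns an AsyncResult handle right away; a minimal sketch of inspecting it (fetching the return value assumes a result backend, such as django-db, is configured):

from .tasks import my_task

# delay() enqueues the task and returns immediately.
result = my_task.delay(1, 2)

print(result.id)       # unique task id
print(result.ready())  # False until the worker has finished the task
# result.get(timeout=10)  # would block for the return value (needs a result backend)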

--------------------------------------------------------------

9- Starting The Worker Process:

Open a new terminal tab, and run the following command:
celery -A mysite worker -l info

--------------------------------------------------------------

+Periodic Tasks from tasks.py (Oct. 14, 2018, 10:24 a.m.)

import datetime
from celery.task import periodic_task


@periodic_task(run_every=datetime.timedelta(minutes=5))
def myfunc():
    print('periodic_task')

+Periodic Tasks from settings.py (Oct. 14, 2018, 10:53 a.m.)

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': timedelta(seconds=30),
        'args': (16, 16)
    },
}
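
For the schedule above to work, a task registered under the name 'tasks.add' must exist; a minimal sketch (the module path is an assumption chosen to match the 'tasks.add' name used above):

# tasks.py
from celery import shared_task

@shared_task
def add(x, y):
    return x + y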

+Running tasks in shell (Oct. 11, 2018, 10:49 a.m.)

celery -A project_name beat

celery -A cdr worker -l info

+Daemon Scripts (Sept. 29, 2015, 11:39 a.m.)

These scripts are needed when you want to run the worker as a daemon.

The first is used for seeing the output of running tasks. For example, I had something printed in the console, from within the task, and I could see the output (the printed string) in this terminal.

The second is for firing up / starting the tasks.


1- Create a file /etc/supervisor/conf.d/celeryd.conf with this content:
[program:celery]
; Set full path to celery program if using virtualenv
command=/home/mohsen/virtualenvs/django-1.7/bin/celery worker -A cdr --loglevel=INFO

directory=/home/mohsen/websites/cdr/
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown. Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it send SIGKILL to its whole process group instead, taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

--------------------------------------------------------------------------------------------

2- Create a file /etc/supervisor/conf.d/celerybeat.conf with this content:

[program:celerybeat]
; Set full path to celery program if using virtualenv
command=/home/mohsen/virtualenvs/django-1.7/bin/celery beat -A cdr

; remove the -A myapp argument if you are not using an app instance

directory=/home/mohsen/websites/cdr/
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/beat.log
stderr_logfile=/var/log/celery/beat.log
autostart=true
autorestart=true
startsecs=10

; if rabbitmq is supervised, set its priority higher so it starts first
priority=999

Ceph
+RBD (Oct. 30, 2017, 10:01 a.m.)

rbd is a utility for manipulating rados block device (RBD) images, used by the Linux rbd driver and the rbd storage driver for Qemu/KVM. RBD images are simple block devices that are striped over objects and stored in a RADOS object store. The size of the objects the image is striped over must be a power of two.
-------------------------------------------------------------
rbd -p image ls

rbd -p image info Windows7x8

rbd -p image rm Win7x86WithApps

rbd export --pool=image disk_user01_2 /root/Windows7x86.qcow2

The "2" is the ID of the Template in deskbit admin panel.
-------------------------------------------------------------

+Changing a Monitor’s IP address (Sept. 19, 2017, 4:42 p.m.)

http://docs.ceph.com/docs/kraken/rados/operations/add-or-rm-mons/
-----------------------------------------------------------------------
ceph mon getmap -o /tmp/a

monmaptool --print /tmp/a

monmaptool --rm vdiali /tmp/a

monmaptool --add vdiali 10.10.1.121 /tmp/a

monmaptool --print /tmp/a

systemctl stop ceph-mon*

ceph-mon -i vdimohsen --inject-monmap /tmp/a

Change IP in the following files:
/etc/network/interfaces
/etc/default/avalaunch
/etc/ceph/ceph.conf
/etc/hosts

+Properly remove an OSD (Aug. 23, 2017, 12:35 p.m.)

Removing an OSD, if not done properly, can result in double rebalancing. The best practice is to change the OSD's CRUSH weight to 0.0 as the first step.

$ ceph osd crush reweight osd.<ID> 0.0

Then wait for the rebalancing to complete. Finally, remove the OSD completely:

$ ceph osd out <ID>
$ service ceph stop osd.<ID>
$ ceph osd crush remove osd.<ID>
$ ceph auth del osd.<ID>
$ ceph osd rm <ID>
----------------------------------------------------------
From the docs:
Remove an OSD

To remove an OSD from the CRUSH map of a running cluster, execute the following:
ceph osd crush remove {name}

For getting the name:
ceph osd tree

+Errors - undersized+degraded+peered (July 4, 2017, 5:25 p.m.)

http://mohankri.weebly.com/my-interest/single-host-multiple-osd
---------------------------------------------------------
ceph osd crush rule create-simple same-host default osd

ceph osd pool set rbd crush_ruleset 1
---------------------------------------------------------

+Commands (July 3, 2017, 3:53 p.m.)

ceph osd tree

ceph osd dump

ceph osd lspools

ceph osd pool ls

ceph osd pool get rbd all

ceph osd pool set rbd size 2

ceph osd crush rule ls
-----------------------------------------------------
ceph-osd -i 0

ceph-osd -i 0 --mkfs --mkkey
-----------------------------------------------------
ceph -w

ceph -s

ceph health detail
-----------------------------------------------------
ceph-disk activate /var/lib/ceph/osd/ceph-0

ceph-disk list

chown ceph:disk /dev/sda1 /dev/sdb1
-----------------------------------------------------
ceph-mon -f --cluster ceph --id vdi --setuser ceph --setgroup ceph
-----------------------------------------------------
systemctl -a | grep ceph

systemctl status ceph-osd*

systemctl status ceph-mon*

systemctl enable ceph-mon.target
-----------------------------------------------------
rbd -p image ls

rbd export --pool=image disk_win_7 /root/win7.img
-----------------------------------------------------
cd /var/lib/ceph/osd/
ceph-2 ceph-3 ceph-8


mount
mount | grep -i vda
mount | grep -i vdb
mount | grep -i vdc
mount | grep ceph

fdisk -l

mount /dev/vdc1 ceph-3/

systemctl restart ceph-osd@3
ceph osd tree
********************************
systemctl restart ceph-osd@5

mount | grep -i ceph


systemctl restart ceph-osd@5
Job for ceph-osd@5.service failed because the control process exited with error code.
See "systemctl status ceph-osd@5.service" and "journalctl -xe" for details.

systemctl daemon-reload
systemctl restart ceph-osd@5
ceph osd tree
ceph -w

-----------------------------------------------------

+ceph-ansible (Jan. 7, 2017, 10:58 a.m.)

https://github.com/ceph/ceph-ansible
---------------------------------------------------
0- apt-get update # Ensure you do this step before running ceph-ansible!!!

1- apt-get install libffi-dev libssl-dev python-pip python-setuptools sudo python-dev

git clone https://github.com/ceph/ceph-ansible/
---------------------------------------------------
2- pip install markupsafe ansible
---------------------------------------------------
3-Setup your Ansible inventory file:
[mons]
mohsen3.deskbit.local

[osds]
mohsen3.deskbit.local
---------------------------------------------------
4-Now enable the site.yml and group_vars files:

cp site.yml.sample site.yml

You need to copy all files within `group_vars` directory; omit the `.sample` part:
for f in *.sample; do cp "$f" "${f/.sample/}"; done
---------------------------------------------------
5-Open the file `group_vars/all.yml` for editing:

nano group_vars/all.yml

Uncomment the variable `ceph_origin` and replace `upstream` with `distro`:
ceph_origin: 'distro'

Uncomment and replace:
monitor_interface: eth0

Uncomment:
journal_size: 5120
---------------------------------------------------
6-Choosing a scenario:
Open the file `group_vars/osds.yml` and uncomment and set to `true` the following variables:

osd_auto_discovery: true
journal_collocation: true
---------------------------------------------------
7- Any needed configs for ceph should be added to the file `group_vars/all.yml`.
Uncomment and change:

ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: 8
    osd_pool_default_size: 1
---------------------------------------------------
Path to variables file:
/etc/ansible/playbooks/ceph/ceph-ansible/roles/ceph-common/templates/ceph.conf.j2
---------------------------------------------------

+Adding Monitors (Jan. 4, 2017, 2:13 p.m.)

A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high availability, Ceph Storage Clusters typically run multiple Ceph Monitors so that the failure of a single Ceph Monitor will not bring down the Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority of monitors (i.e., 1, 2:3, 3:4, 3:5, 4:6, etc.) to form a quorum.

Add two Ceph Monitors to your cluster.
-------------------------------------------
ceph-deploy mon add node2
ceph-deploy mon add node3
-------------------------------------------
Once you have added your new Ceph Monitors, Ceph will begin synchronizing the monitors and form a quorum. You can check the quorum status by executing the following:

ceph quorum_status --format json-pretty
-------------------------------------------
When you run Ceph with multiple monitors, you SHOULD install and configure NTP on each monitor host. Ensure that the monitors are NTP peers.
-------------------------------------------

+Adding an OSD (Jan. 4, 2017, 2:08 p.m.)

1- mkdir /var/lib/ceph/osd/ceph-3

2- ceph-disk prepare /var/lib/ceph/osd/ceph-3

3- ceph-disk activate /var/lib/ceph/osd/ceph-3

4- Once you have added your new OSD, Ceph will begin rebalancing the cluster by migrating placement groups to your new OSD. You can observe this process with the ceph CLI:
ceph -w

You should see the placement group states change from active+clean to active with some degraded objects, and finally active+clean when migration completes. (Control-c to exit.)

+Storage Cluster (Jan. 3, 2017, 3:10 p.m.)

To purge the Ceph packages, execute: (Used for when you want to purge data)
ceph-deploy purge node1


If at any point you run into trouble and you want to start over, execute the following to purge the configuration:
ceph-deploy purgedata node1
ceph-deploy forgetkeys
--------------------------------------------
1-Create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster:
mkdir my-cluster
cd my-cluster
--------------------------------------------
2-Create the cluster:
ceph-deploy new node1

Using `ls` command, you should see a Ceph configuration file, a monitor secret keyring, and a log file for the new cluster.
--------------------------------------------
3-Change the default number of replicas in the Ceph configuration file from 3 to 2 so that Ceph can achieve an active + clean state with just two Ceph OSDs. Add the following line under the [global] section:

osd pool default size = 2
osd_max_object_name_len = 256
osd_max_object_namespace_len = 64

The last two options are for EXT4, based on this link:
http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/
--------------------------------------------
4-Install Ceph:
ceph-deploy install node1

The ceph-deploy utility will install Ceph on each node.
--------------------------------------------
5-Add the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial

Once you complete the process, your local directory should have the following keyrings:

{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring
--------------------------------------------
6-Add OSDs:
For fast setup, this quick start uses a directory rather than an entire disk per Ceph OSD Daemon.

See:
http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd
for details on using separate disks/partitions for OSDs and journals.

Login to the Ceph Nodes and create a directory for the Ceph OSD Daemon.
ssh node2
sudo mkdir /var/local/osd0
exit

ssh node3
sudo mkdir /var/local/osd1
exit

Then, from your admin node, use ceph-deploy to prepare the OSDs.
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

Finally, activate the OSDs:
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
--------------------------------------------
7-Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

ceph-deploy admin node1 node2

Login to nodes and ensure that you have the correct permissions for the ceph.client.admin.keyring.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph health
-------------------------------------------

+Ceph Node Setup (Jan. 3, 2017, 2:55 p.m.)

1-Create a user on each Ceph Node.
--------------------------------------------
2-Add sudo privileges for the user on each Ceph Node.
--------------------------------------------
3-Configure your ceph-deploy admin node with password-less SSH access to each Ceph Node.
ssh-keygen and ssh-copy-id
--------------------------------------------
4-Modify the ~/.ssh/config file of your ceph-deploy admin node so that it logs into Ceph Nodes as the user you created.
Host node1
    Hostname node1
    User root
Host node2
    Hostname node2
    User root
Host node3
    Hostname node3
    User root
--------------------------------------------
5-Add to /etc/hosts:
10.10.0.84 node1
10.10.0.85 node2
10.10.0.86 node3
10.10.0.87 node4
--------------------------------------------
6-Change the hostname of each node to the ones from the earlier step (node1, node2, node3, ...):
nano /etc/hostname
reboot each node
--------------------------------------------

+Acronyms (Jan. 1, 2017, 3:40 p.m.)

CRUSH: Controlled Replication Under Scalable Hashing
EBOFS: Extent and B-tree based Object File System
HPC: High-Performance Computing
MDS: MetaData Server
OSD: Object Storage Device
PG: Placement Group
PGP = Placement Group for Placement purpose
POSIX: Portable Operating System Interface for Unix
RADOS: Reliable Autonomic Distributed Object Store
RBD: RADOS Block Devices

+Ceph Deploy (Dec. 28, 2016, 12:51 p.m.)

Descriptions:
The admin node must have password-less SSH access to the Ceph nodes. When ceph-deploy logs into a Ceph node as a user, that particular user must have passwordless sudo privileges.

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift. See Clock for details.

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the same NTP time server
------------------------------------------------------
For ALL Ceph Nodes perform the following steps:
sudo apt-get install openssh-server
------------------------------------------------------
Create a Ceph Deploy User:
The ceph-deploy utility must log into a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

We recommend creating a specific user for ceph-deploy on ALL Ceph nodes in the cluster. Please do NOT use “ceph” as the username. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because hackers typically use them with brute force hacks (e.g., root, admin, {productname}). The following procedure, substituting {username} for the username you define, describes how to create a user with passwordless sudo.

sudo useradd -d /home/{username} -m {username}
sudo passwd {username}
------------------------------------------------------

------------------------------------------------------

+Installation (Dec. 27, 2016, 3:57 p.m.)

http://docs.ceph.com/docs/master/start/quick-start-preflight/
----------------------------------------------------
1- wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

2- echo deb https://download.ceph.com/debian-hammer/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

3- sudo apt-get install ceph ceph-deploy

+Definitions (Dec. 27, 2016, 1:10 p.m.)

Ceph:
Ceph is a storage technology.
-------------------------------------------------
Cluster:
A cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.
-------------------------------------------------
Clustering vs. Clouding:
Cluster differs from Cloud and Grid in that a cluster is a group of computers connected by a local area network (LAN), whereas cloud is more wide scale and can be geographically distributed. Another way to put it is to say that a cluster is tightly coupled, whereas a cloud is loosely coupled. Also, clusters are made up of machines with similar hardware, whereas clouds are made up of machines with possibly very different hardware configurations.
-------------------------------------------------
Ceph Storage Cluster:
A distributed object store that provides storage of unstructured data for applications.
-------------------------------------------------
Ceph Object Gateway:
A powerful S3- and Swift-compatible gateway that brings the power of the Ceph Object Store to modern applications.
-------------------------------------------------
Ceph Block Device:
A distributed virtual block device that delivers high-performance, cost-effective storage for virtual machines and legacy applications.
-------------------------------------------------
Ceph File System:
A distributed, scale-out filesystem with POSIX semantics that provides storage for legacy and modern applications.
-------------------------------------------------
RADOS:
A reliable, autonomous, distributed object store comprised of self-healing, self-managing intelligent storage nodes.
-------------------------------------------------
LIBRADOS:
A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP.
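For example, a minimal sketch of the Python binding (assuming the python-rados package is installed, a readable /etc/ceph/ceph.conf, and an existing pool named 'rbd'):

import rados

# Connect to the cluster using the local Ceph configuration and keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('rbd')   # open an I/O context on a pool
ioctx.write_full('greeting', b'hello ceph')  # store an object
print(ioctx.read('greeting'))                # read it back

ioctx.close()
cluster.shutdown()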
-------------------------------------------------
RADOSGW:
A bucket-based REST gateway, compatible with S3 and Swift.
-------------------------------------------------
RBD:
A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver.
-------------------------------------------------
Ceph FS:
A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE.
-------------------------------------------------
pg_num = number of placement groups mapped to an OSD
-------------------------------------------------
Placement Groups (PGs):

Ceph maps objects to placement groups. Placement groups are shards or fragments of a logical object pool that place objects as a group into OSDs. Placement groups reduce the amount of per-object metadata when Ceph stores the data in OSDs. A larger number of placement groups (e.g., 100 per OSD) leads to better balancing.
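
As a rough illustration of sizing pg_num, a minimal sketch of the commonly cited heuristic (about 100 PGs per OSD, divided by the replica count, rounded up to a power of two; the numbers below are assumptions for illustration):

# pg_num heuristic: (osds * pgs_per_osd) / replicas, rounded up to a power of 2
def suggested_pg_num(num_osds, replicas, pgs_per_osd=100):
    total = num_osds * pgs_per_osd // replicas
    power = 1
    while power < total:
        power *= 2
    return power

print(suggested_pg_num(num_osds=9, replicas=3))  # -> 512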
-------------------------------------------------

CSS
+Table - Cellpadding and cellspacing (Feb. 7, 2020, 11:48 a.m.)

table {
  border-spacing: 10px;
  border-collapse: separate;
}

----------------------------------------------------------

#table2 {
  border-collapse: separate;
  border-spacing: 15px 50px;
}

----------------------------------------------------------

+Remove href values when printing (July 2, 2019, 12:11 a.m.)

@media print {
  a[href]:after {
    visibility: hidden;
  }
}

+Removing page title and date when printing (July 2, 2019, 12:09 a.m.)

@page {
  size: auto;
  margin: 0;
}

+Media Queries (Feb. 9, 2016, 12:05 p.m.)

@media all and (max-width: 480px) {

}


@media all and (min-width: 480px) and (max-width: 768px) {

}


@media all and (min-width: 768px) and (max-width: 1024px) {

}

@media all and (min-width: 1024px) {

}

/*------------------------------------------
Responsive Grid Media Queries - 1280, 1024, 768, 480
1280-1024 - desktop (default grid)
1024-768 - tablet landscape
768-480 - tablet
480-less - phone landscape & smaller
--------------------------------------------*/
@media all and (min-width: 1024px) and (max-width: 1280px) { }

@media all and (min-width: 768px) and (max-width: 1024px) { }

@media all and (min-width: 480px) and (max-width: 768px) { }

@media all and (max-width: 480px) { }

/*------------------------------------------
Foundation Media Queries
http://foundation.zurb.com/docs/media-queries.html
--------------------------------------------*/

/* Small screens - MOBILE */
@media only screen { } /* Define mobile styles - Mobile First */

@media only screen and (max-width: 40em) { } /* max-width 640px, mobile-only styles, use when QAing mobile issues */

/* Medium screens - TABLET */
@media only screen and (min-width: 40.063em) { } /* min-width 641px, medium screens */

@media only screen and (min-width: 40.063em) and (max-width: 64em) { } /* min-width 641px and max-width 1024px, use when QAing tablet-only issues */

/* Large screens - DESKTOP */
@media only screen and (min-width: 64.063em) { } /* min-width 1025px, large screens */

@media only screen and (min-width: 64.063em) and (max-width: 90em) { } /* min-width 1024px and max-width 1440px, use when QAing large screen-only issues */

/* XLarge screens */
@media only screen and (min-width: 90.063em) { } /* min-width 1441px, xlarge screens */

@media only screen and (min-width: 90.063em) and (max-width: 120em) { } /* min-width 1441px and max-width 1920px, use when QAing xlarge screen-only issues */

/* XXLarge screens */
@media only screen and (min-width: 120.063em) { } /* min-width 1921px, xlarge screens */

/*------------------------------------------*/



/* Portrait */
@media screen and (orientation:portrait) { /* Portrait styles here */ }
/* Landscape */
@media screen and (orientation:landscape) { /* Landscape styles here */ }


/* CSS for iPhone, iPad, and Retina Displays */

/* Non-Retina */
@media screen and (-webkit-max-device-pixel-ratio: 1) {
}

/* Retina */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min--moz-device-pixel-ratio: 1.5),
only screen and (min-device-pixel-ratio: 1.5) {
}

/* iPhone Portrait */
@media screen and (max-device-width: 480px) and (orientation:portrait) {
}

/* iPhone Landscape */
@media screen and (max-device-width: 480px) and (orientation:landscape) {
}

/* iPad Portrait */
@media screen and (min-device-width: 481px) and (orientation:portrait) {
}

/* iPad Landscape */
@media screen and (min-device-width: 481px) and (orientation:landscape) {
}

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no" />


/*------------------------------------------
Live demo samples
- http://andrelion.github.io/mediaquery/livedemo.html
--------------------------------------------*/

+Media Tag (Sept. 2, 2015, 4:44 p.m.)

@media (max-width: 767px) {
  #inner-coffee-machine > div > img {
    width: 30%;
    height: 18%;
  }

  #inner-coffee-machine > div > div h3 {
    font-size: 2.5vh;
    font-weight: bold;
  }

  #inner-coffee-machine > div > div h5 {
    font-size: 2vh;
  }

  #club-inner {
    display: inline-table;
  }

  #inner-coffee-machine > div > div {
    width: 100%;
  }
}

@media (min-width: 768px) and (max-width: 991px) {

}

@media (min-width: 992px) and (max-width: 1199px) {

}

@media (min-width: 1200px) {

}

+Define new font (Sept. 1, 2015, 11:21 a.m.)

@font-face {
  font-family: nespresso;
  src: url("../fonts/nespresso.otf") format("opentype"),
       url("../fonts/nespresso.ttf") format("truetype");
}

@font-face {
  font-family: 'yekan';
  src: url(../fonts/yekan.eot) format("embedded-opentype"),
       url(../fonts/yekan.woff) format("woff"),
       url(../fonts/yekan.ttf) format("truetype");
}

+CSS for different IE versions (July 27, 2015, 1:40 p.m.)

IE-6 ONLY

* html #div {
height: 300px;
}
----------------------------------------------------------------------------
IE-7 ONLY

*+html #div {
height: 300px;
}
----------------------------------------------------------------------------
IE-8 ONLY

#div {
height: 300px\0/;
}
----------------------------------------------------------------------------
IE-7 & IE-8

#div {
height: 300px\9;
}
----------------------------------------------------------------------------
NON IE-7 ONLY:

#div {
_height: 300px;
}
----------------------------------------------------------------------------
Hide from IE 6 and LOWER:

#div {
height/**/: 300px;
}
----------------------------------------------------------------------------
html > body #div {
height: 300px;
}

+Fonts (July 13, 2015, 1:15 p.m.)

http://www.caritorsolutions.com/blog/162-how-to-use-font-awesome-icons
http://astronautweb.co/snippet/font-awesome/

+white-space (July 9, 2015, 3:44 a.m.)

white-space: normal;
The text will wrap.
-------------------------------
If you want to prevent the text from wrapping, you can apply:
white-space: nowrap;
-------------------------------
If we want to force the browser to display line breaks and extra white space characters we can use:
white-space: pre;
-------------------------------
If you want white space and breaks, but you need the text to wrap instead of potentially break out of its parent container:
white-space: pre-wrap;
-------------------------------
white-space: pre-line;
Will break lines where they break in code, but extra white space is still stripped.

Dart
+Map - forEach (April 10, 2020, 12:19 a.m.)

Map<int, String> _levels = <int, String>{
  0: 'All Levels',
  1: 'Beginner',
  2: 'Intermediate',
  3: 'Advanced'
};

// Map.forEach passes (key, value) to the callback; here the key becomes
// the DropdownMenuItem value and the map value becomes its label.
_levels.forEach((int level, String title) {
  items.add(new DropdownMenuItem(
    value: level.toString(),
    child: new Text(title),
  ));
});

+Getters and setters (March 27, 2020, 11:44 p.m.)

You can define getters and setters whenever you need more control over a property than a simple field allows.

For example, you can make sure a property’s value is valid:

class MyClass {
int _aProperty = 0;

int get aProperty => _aProperty;

set aProperty(int value) {
if (value >= 0) {
_aProperty = value;
}
}
}

You can also use a getter to define a computed property:

class MyClass {
List<int> _values = [];

void addValue(int value) {
_values.add(value);
}

// A computed property.
int get count {
return _values.length;
}
}

+Cascades (March 27, 2020, 11:42 p.m.)



To perform a sequence of operations on the same object, use cascades (..).

myObject.someMethod()

It invokes someMethod() on myObject, and the result of the expression is the return value of someMethod().

Here’s the same expression with a cascade:

myObject..someMethod()

Although it still invokes someMethod() on myObject, the result of the expression isn’t the return value — it’s a reference to myObject! Using cascades, you can chain together operations that would otherwise require separate statements. For example, consider this code:

var button = querySelector('#confirm');
button.text = 'Confirm';
button.classes.add('important');
button.onClick.listen((e) => window.alert('Confirmed!'));

With cascades, the code becomes much shorter, and you don’t need the button variable:

querySelector('#confirm')
..text = 'Confirm'
..classes.add('important')
..onClick.listen((e) => window.alert('Confirmed!'));

+Arrow syntax (March 27, 2020, 11:41 p.m.)

bool hasEmpty = aListOfStrings.any((s) => s.isEmpty);

is equivalent to:

bool hasEmpty = aListOfStrings.any((s) {
return s.isEmpty;
});

----------------------------------------------------------------

+Collection literals (March 27, 2020, 11:38 p.m.)

Dart has built-in support for lists, maps, and sets. You can create them using literals:

final aListOfStrings = ['one', 'two', 'three'];
final aSetOfStrings = {'one', 'two', 'three'};
final aMapOfStringsToInts = {
'one': 1,
'two': 2,
'three': 3,
};

--------------------------------------------------------------------

Dart’s type inference can assign types to these variables for you. In this case, the inferred types are List<String>, Set<String>, and Map<String, int>.

Or you can specify the type yourself:

final aListOfInts = <int>[];
final aSetOfInts = <int>{};
final aMapOfIntToDouble = <int, double>{};

Specifying types is handy when you initialize a list with contents of a subtype, but still want the list to be List<BaseType>:

final aListOfBaseType = <BaseType>[SubType(), SubType()];

--------------------------------------------------------------------

+String interpolation (March 27, 2020, 11:36 p.m.)

'${3 + 2}'                   // '5'

'${"word".toUpperCase()}'    // 'WORD'

'$myObject'                  // The value of myObject.toString()

+Null-aware Operators (March 27, 2020, 11:31 p.m.)

??

Use ?? when you want to evaluate and return an expression IFF another expression resolves to null.

exp ?? otherExp

is similar to

((x) => x == null ? otherExp : x)(exp)

-------------------------------------------------------------------------

??=

Use ??= when you want to assign a value to an object IFF that object is null. Otherwise, return the object.

obj ??= value

is similar to

((x) => x == null ? obj = value : x)(obj)

-------------------------------------------------------------------------

?.

Use ?. when you want to call a method/getter on an object IFF that object is not null (otherwise, return null).

obj?.method()

is similar to

((x) => x == null ? null : x.method())(obj)

You can chain ?. calls, for example:

obj?.child?.child?.getter

If obj or either child is null, the entire expression returns null. Otherwise, getter is called and its value returned.

-------------------------------------------------------------------------

...?

Dart 2.3 brings in a spread operator (...) and, with it, a new null-aware spread operator, ...?

Placing ... before an expression inside a collection literal unpacks the result of the expression and inserts its elements directly inside the new collection.

So now, these two are equivalent:

List numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

and

List lowerNumbers = [1, 2, 3, 4, 5];
List upperNumbers = [6, 7, 8, 9, 10];
List numbers = [...lowerNumbers, ...upperNumbers];

To benefit from the new null-aware spread operator, you can use it like this:

List lowerNumbers = [1, 2, 3, 4, 5];
List upperNumbers = [6, 7, 8, 9, 10];
List numbers = [...lowerNumbers, ...?upperNumbers];

which is equivalent to:

List numbers = [];
numbers.addAll(lowerNumbers);
if (upperNumbers != null) {
  numbers.addAll(upperNumbers);
}

-------------------------------------------------------------------------

+Array utility methods (March 27, 2020, 8:36 p.m.)

forEach()

var fruits = ['banana', 'pineapple', 'watermelon'];
fruits.forEach((fruit) => print(fruit)); // => banana pineapple watermelon

-------------------------------------------------------------------------

map()

var mappedFruits = fruits.map((fruit) => 'I love $fruit').toList();
print(mappedFruits); // => ['I love banana', 'I love pineapple', 'I love watermelon']

-------------------------------------------------------------------------

contains()

var numbers = [1, 3, 2, 5, 4];
print(numbers.contains(2)); // => true

-------------------------------------------------------------------------

sort()

numbers.sort((num1, num2) => num1 - num2); // => [1, 2, 3, 4, 5]

-------------------------------------------------------------------------

reduce(), fold()

Compresses the elements to a single value, using the given function.


var sum = numbers.reduce((curr, next) => curr + next);
print(sum); // => 15

const initialValue = 10;
var sum2 = numbers.fold(initialValue, (curr, next) => curr + next);
print(sum2); // => 25

-------------------------------------------------------------------------

every()

Confirms that every element satisfies the test.

List<Map<String, dynamic>> users = [
  {"name": 'John', "age": 18},
  {"name": 'Jane', "age": 21},
  {"name": 'Mary', "age": 23},
];

var is18AndOver = users.every((user) => user["age"] >= 18);
print(is18AndOver); // => true

var hasNamesWithJ = users.every((user) => user["name"].startsWith('J'));
print(hasNamesWithJ); // => false

-------------------------------------------------------------------------

where(), firstWhere(), singleWhere()

Returns a collection of elements that satisfy a test.

// See the example above for the users list
var over21s = users.where((user) => user["age"] > 21);
print(over21s.length); // => 1

var nameJ = users.firstWhere((user) => user["name"].startsWith('J'), orElse: () => null);
print(nameJ); // => {name: John, age: 18}

var under18s = users.singleWhere((user) => user["age"] < 18, orElse: () => null);
print(under18s); // => null

firstWhere() returns the first match in the list, while singleWhere() returns the first match provided there is exactly one match.

-------------------------------------------------------------------------

take(), skip()

Returns a collection while including or skipping elements.

var fiboNumbers = [1, 2, 3, 5, 8, 13, 21];
print(fiboNumbers.take(3).toList()); // => [1, 2, 3]
print(fiboNumbers.skip(5).toList()); // => [13, 21]
print(fiboNumbers.take(3).skip(2).take(1).toList()); // => [3]

-------------------------------------------------------------------------

List.from()

Creates a new list from the given collection.

var clonedFiboNumbers = List.from(fiboNumbers);
print('Cloned list: $clonedFiboNumbers');

-------------------------------------------------------------------------

expand()

Expands each element into zero or more elements.

var pairs = [[1, 2], [3, 4]];
var flattened = pairs.expand((pair) => pair).toList();
print('Flattened result: $flattened'); // => [1, 2, 3, 4]

var input = [1, 2, 3];
var duplicated = input.expand((i) => [i, i]).toList();
print(duplicated); // => [1, 1, 2, 2, 3, 3]

-------------------------------------------------------------------------

+Filtering list values (March 27, 2020, 8:31 p.m.)

List languages = new List();
languages.add('Python');
languages.add('Perl');
languages.add('Dart');
List short = languages.where((l) => l.length < 5).toList();
print(short); // [Perl, Dart]

------------------------------------------------------------------

var fruits = ['apples', 'oranges', 'bananas'];
fruits.where((f) => f.startsWith('a')).toList(); // [apples]

------------------------------------------------------------------

_AnimatedMovies = AllMovies.where((i) => i.isAnimated).toList();

------------------------------------------------------------------

+Constructors (March 27, 2020, 4:20 a.m.)

class Student {
int id = -1;
String name;

Student(this.id, this.name); // Parameterised Constructor

Student.myCustomConstructor() { // Named Constructor
print("This is my custom constructor");
}

Student.myAnotherNamedConstructor(this.id, this.name); // Named Constructor

void study() {
print("${this.name} is now studying");
}

void sleep() {
print("${this.name} is now sleeping");
}
}

+Exception Handling (March 27, 2020, 4:13 a.m.)

void main() {

print("CASE 1");
// CASE 1: When you know the exception to be thrown, use ON Clause
try {
int result = 12 ~/ 0;
print("The result is $result");
} on IntegerDivisionByZeroException {
print("Cannot divide by Zero");
}

print(""); print("CASE 2");
// CASE 2: When you do not know the exception use CATCH Clause
try {
int result = 12 ~/ 0;
print("The result is $result");
} catch (e) {
print("The exception thrown is $e");
}

print(""); print("CASE 3");
// CASE 3: Using STACK TRACE to know the events occurred before Exception was thrown
try {
int result = 12 ~/ 0;
print("The result is $result");
} catch (e, s) {
print("The exception thrown is $e");
print("STACK TRACE \n $s");
}

print(""); print("CASE 4");
// CASE 4: Whether there is an Exception or not, FINALLY Clause is always Executed
try {
int result = 12 ~/ 3;
print("The result is $result");
} catch (e) {
print("The exception thrown is $e");
} finally {
print("This is FINALLY Clause and is always executed.");
}

print(""); print("CASE 5");
// CASE 5: Custom Exception
try {
depositMoney(-200);
} catch (e) {
print(e.errorMessage());
} finally {
// Code
}
}

class DepositException implements Exception {
String errorMessage() {
return "You cannot enter amount less than 0";
}
}

void depositMoney(int amount) {
if (amount < 0) {
throw new DepositException();
}
}

+Basics (March 27, 2020, 2:56 a.m.)

int age = 32;
var age = 32;

They're both the same.

--------------------------------------------------------------------------

int result = 12 / 4; // Warning: "A value of type 'double' can't be assigned to a variable of type 'int'."

int result = 12 ~/ 4; // The truncating division operator ~/ returns the result as an integer.

--------------------------------------------------------------------------

Final and Const:

If you never want to change a value then use "final" and "const" keywords.

final cityName = 'Tehran';
const PI = 3.14;


The "final" variable can only be set once and it is initialized when accessed.

The "const" variable is implicitly final but it is a compile-time constant, i.e. it is initialized during compilation.


class Circle {
final color = 'red';
static const PI = 3.14; // Only static fields can be declared as const.
}

--------------------------------------------------------------------------

Conditional Expressions - Ternary Operator:

int a = 2;
int b = 3;
a < b ? print("$a is smaller") : print("$b is smaller");

smallerNumber = a < b ? a : b;

--------------------------------------------------------------------------

Conditional Expressions - Null-aware Operator (??):

String name = 'Mohsen';
String nameToPrint = name ?? 'Hassani'; // nameToPrint will be "Mohsen".

String name;
String nameToPrint = name ?? 'Hassani'; // nameToPrint will be "Hassani".

--------------------------------------------------------------------------

For Loop:

List colorNames = ["Blue", "Yellow", "Green", "Red"];
for (String color in colorNames) {
print(color);
}

--------------------------------------------------------------------------

Do-While Loop:

int i = 1;
do {
print('Hello');
i++;
} while (i <= 10);




int i = 1;
do {

if ( i % 2 == 0 ) {
print('Hello');
}

i++;
} while (i <= 10);

--------------------------------------------------------------------------

Break Keyword:

myOuterLoop: for (int i = 1; i <= 3; i++) {

  innerLoop: for (int j = 1; j <= 3; j++) {
    print("$i $j");

    if (i == 2 && j == 2) {
      break myOuterLoop;
    }
  }
}

--------------------------------------------------------------------------

Optional Positional Parameters in Functions:

void printCountries(String name1, [String name2, String name3]) {
print("$name1");
print("$name2");
print("$name3");
}

printCountries("Iran"); // Prints Iran, null, null

--------------------------------------------------------------------------

Optional Named Parameters:

void findVolume(int length, {int breadth, int height}) {
print("Volume is ${length * breadth * height}");
}


findVolume(10, breadth: 5, height: 20);

--------------------------------------------------------------------------

Optional Default Parameters:

void findVolume(int length, {int breadth = 2, int height = 20}) {
print("Volume is ${length * breadth * height}");
}

--------------------------------------------------------------------------

DevOps
+Nginx, Gunicorn, WSGI, uWSGI - Descriptions (Sept. 15, 2020, 10:29 a.m.)

Nginx: listens on port 80 for incoming HTTP requests from the internet.

Gunicorn: listens on another port (8000 is the popular one) for HTTP requests from Nginx. Gunicorn is configured with our Django web app and serves the dynamic content passed from Nginx. (Note that Gunicorn can handle static content (CSS/JS/images), but Nginx is better optimized for it, so we need both Nginx and Gunicorn for a proper Django deployment.)

WSGI (Web Server Gateway Interface) servers include Gunicorn, uWSGI, and mod_wsgi. A web server faces the outside world. It can serve files (HTML, images, CSS, etc.) directly from the file system; however, it can't talk directly to Django applications. It needs something that will run the application, feed it requests from web clients (such as browsers), and return responses.

With Nginx, mod_wsgi is out of the picture, and we have to choose between Gunicorn and uWSGI. The WSGI server sits between Nginx (the web server) and Django (the Python app). It doesn't talk to our Django project; it imports it. It does something like this:

from mysite.wsgi import application
application(args)


uWSGI: a fully-featured application server that implements the WSGI (Python standard) spec. Generally, uWSGI is paired with a reverse proxy (such as Nginx). It creates a Unix socket and serves responses to the web server via the uwsgi protocol.

the web client <-> the web server <-> the socket <-> uwsgi <-> Django
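
For reference, here is a minimal sketch of what a WSGI callable looks like; Gunicorn or uWSGI imports and calls an object shaped like this (the function body is illustrative, not Django's actual code):

# Minimal WSGI application sketch.
# The WSGI server invokes this callable once per request, passing the
# request environment and a hook for starting the response.
def application(environ, start_response):
    body = b'Hello from a WSGI app!'
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(body))),
    ])
    return [body]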


-------------------------------------------------------------------

Gunicorn WSGI HTTP server

Gunicorn is a pure-Python WSGI HTTP server and it has no dependencies and is easy to install and use.

As a Python HTTP server, Gunicorn interfaces with both Nginx and our actual python web-app code to serve dynamic content while Nginx is responsible for serving static content and others.

We can test Gunicorn by typing:
gunicorn --bind 0.0.0.0:8000 mysite.wsgi:application

Or simpler command:
gunicorn mysite.wsgi

-------------------------------------------------------------------

+RabbitMQ (Sept. 3, 2020, 10:37 p.m.)

RabbitMQ is a message broker: it accepts and forwards messages. You can think about it as a post office: when you put the mail that you want posting in a post box, you can be sure that Mr. or Ms. Mailperson will eventually deliver the mail to your recipient. In this analogy, RabbitMQ is a post box, a post office, and a postman.

The major difference between RabbitMQ and the post office is that it doesn't deal with paper; instead, it accepts, stores, and forwards binary blobs of data ‒ messages.


RabbitMQ is an open-source middleware message solution that natively uses AMQP communications but it has a good selection of plug-ins to support features like MQTT, MQTT Web Sockets, HTTP REST API, and server-to-server communications.
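
As a minimal illustration (assuming a RabbitMQ server on localhost and the pika Python client), a producer can look like this:

import pika

# Connect to a RabbitMQ broker assumed to be running on localhost.
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Declare the queue (idempotent) and publish one message to it via the
# default exchange, using the queue name as the routing key.
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body=b'Hello World!')

connection.close()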

+Message Brokers (Sept. 3, 2020, 1:03 p.m.)

What is a message broker?

A message broker is software that enables applications, systems, and services to communicate with each other and exchange information. It does this by translating messages between formal messaging protocols, which allows interdependent services to “talk” with one another directly, even if they were written in different languages or implemented on different platforms.


Asynchronous messaging refers to the type of inter-application communication that message brokers make possible. It prevents the loss of valuable data and enables systems to continue functioning even in the face of the intermittent connectivity or latency issues common on public networks.

-----------------------------------------------------------------------

Message brokers vs. APIs

REST APIs are commonly used for communications between microservices. The term Representational State Transfer (REST) defines a set of principles and constraints that developers can follow when building web services.

Any services that adhere to them will be able to communicate via a set of uniform shared stateless operators and requests. Application Programming Interface (API) denotes the underlying code that, if it conforms to REST rules, allows the services to talk to one another.

REST APIs use Hypertext Transfer Protocol (HTTP) to communicate. Because HTTP is the standard transport protocol of the public Internet, REST APIs are widely known, frequently used, and broadly interoperable. HTTP is a request/response protocol, however, so it is best used in situations that call for a synchronous request/reply. This means that services making requests via REST APIs must be designed to expect an immediate response. If the client receiving the response is down, the sending service will be blocked while it awaits the reply. Failover and error handling logic should be built into both services.

Message brokers enable asynchronous communications between services so that the sending service need not wait for the receiving service’s reply. This improves fault tolerance and resiliency in the systems in which they’re employed. In addition, the use of message brokers makes it easier to scale systems since a pub/sub messaging pattern can readily support changing numbers of services. Message brokers also keep track of consumers’ states.

-----------------------------------------------------------------------

+RabbitMQ and server concepts (Sept. 3, 2020, 1 p.m.)

Producer: Application that sends the messages.

Consumer: Application that receives the messages.

Queue: Buffer that stores messages.

Message: Information that is sent from the producer to a consumer through RabbitMQ.

Connection: A TCP connection between your application and the RabbitMQ broker.

Channel: A virtual connection inside a connection. When publishing or consuming messages from a queue - it's all done over a channel.

Exchange: Receives messages from producers and pushes them to queues depending on rules defined by the exchange type. To receive messages, a queue needs to be bound to at least one exchange.

Binding: A binding is a link between a queue and an exchange.

Routing key: A key that the exchange looks at to decide how to route the message to queues. Think of the routing key like an address for the message.

AMQP: Advanced Message Queuing Protocol is the protocol used by RabbitMQ for messaging.

Users: It is possible to connect to RabbitMQ with a given username and password. Every user can be assigned permissions such as rights to read, write and configure privileges within the instance. Users can also be assigned permissions for specific virtual hosts.

Vhost, virtual host: Provides a way to segregate applications using the same RabbitMQ instance. Different users can have different permissions to different vhost and queues and exchanges can be created, so they only exist in one vhost.
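
A minimal consumer sketch with the pika Python client ties several of these concepts together (connection, channel, queue, consumer); the host and queue name are assumptions:

import pika

# Connection: a TCP connection to the broker.
# Channel: a virtual connection inside it; all consuming happens over it.
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')


def callback(ch, method, properties, body):
    # Consumer: this function receives each message delivered from the queue.
    print('Received:', body)


channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True)
channel.start_consuming()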

+Message queue and Message broker differences (Sept. 3, 2020, 12:47 p.m.)

A message queue is a data structure, or a container - a way to hold messages for eventual consumption.

A message broker is a separate component that manages queues.

------------------------------------------------------------------

Message Broker is built to extend MQ, and it is capable of understanding the content of each message that it moves through the Broker.

------------------------------------------------------------------

+What features do orchestration tools offer? (July 18, 2020, 3:31 p.m.)

- High Availability or no downtime
- Scalability or high performance
- Disaster recovery - backup and restore

+HA - High Availability (July 18, 2020, 3:27 p.m.)

High Availability means that the application has no downtime so it's always accessible by users.

+Docker Swarm vs Kubernetes (July 18, 2020, 2:39 p.m.)

Kubernetes is more complex to install and set up and has a steep learning curve, but it is more powerful.

Docker Swarm is more lightweight, but limited in its functionality.

--------------------------------------------------------------------

Kubernetes supports auto-scaling.

Docker Swarm needs manual scaling to be configured.

--------------------------------------------------------------------

Kubernetes has built-in monitoring.

Docker Swarm depends on third-party tools for monitoring.

--------------------------------------------------------------------

In Kubernetes, load balancing has to be set up manually.

Docker Swarm supports auto load balancing.

--------------------------------------------------------------------

With Kubernetes, you need to learn a separate CLI tool, kubectl.

With Docker Swarm, you use the same docker command line you already use with Docker; you don't need a separate CLI tool.

--------------------------------------------------------------------

+Staging (July 17, 2020, 10:01 a.m.)

A stage, staging, or pre-production environment is an environment for testing that exactly resembles a production environment. It seeks to mirror an actual production environment as closely as possible and may connect to other production services and data, such as databases.

For example, servers will be run on remote machines, rather than locally (as on a developer's workstation during dev, or on a single test machine during the test), which tests the effects of networking on the system.

+Pipeline (July 16, 2020, 3:10 a.m.)

Pipelines are the top-level component of continuous integration, delivery, and deployment.

Pipelines provide an extensible set of tools for modeling the building, testing, and deployment of code. All jobs in a stage run in parallel and, if they all succeed, the pipeline moves on to the next stage. If one of the jobs fails, as a rule, the next stage is not executed.


A pipeline typically consists of the following stages:
1. Build – compilation and packaging of the project.
2. Testing – automated testing with default data.
3. Staging – manual testing and decision on going live.
4. Production – manual.

+Gitlab Pipeline (July 16, 2020, 3:09 a.m.)

A typical pipeline might consist of four stages, executed in the following order:

- A build stage, with a job called compile.
- A test stage, with two jobs called test1 and test2.
- A staging stage, with a job called deploy-to-stage.
- A production stage, with a job called deploy-to-prod.

+CD (July 16, 2020, 3:07 a.m.)

Continuous Delivery adds that the software can be released to production at any time, often by automatically pushing changes to a staging system.

+CI (July 16, 2020, 3:06 a.m.)

Continuous Integration is the practice of merging all the code that is being produced by developers. The merging usually takes place several times a day in a shared repository. Builds and automated tests are then run on the merged code, ensuring there are no integration issues and identifying problems early.

+Kong (July 13, 2020, 6:16 p.m.)

Kong is an API gateway built on top of Nginx.

----------------------------------------------------------------------------------

Kong is an orchestration microservice API gateway. Kong provides a flexible abstraction layer that securely manages communication between clients and microservices via API. It is also known as an API Gateway, API middleware, or in some cases a Service Mesh.

----------------------------------------------------------------------------------

https://docs.konghq.com/enterprise/

----------------------------------------------------------------------------------

You can install it on your server or over Docker. The docker installation is below these instructions.


Installation on a server: (For installing over Docker go to the section at the bottom).
https://konghq.com/get-started/#install

1- Install kong:
apt install -y apt-transport-https curl lsb-core
echo "deb https://kong.bintray.com/kong-deb `lsb_release -sc` main" | sudo tee -a /etc/apt/sources.list
curl -o bintray.key https://bintray.com/user/downloadSubjectPublicKey?username=bintray
sudo apt-key add bintray.key
sudo apt-get update
sudo apt-get install -y kong


2- Copy the configuration file:
cp /etc/kong/kong.conf.default /etc/kong/kong.conf


3- Install PostgreSQL. Provision a database with the name "kong" and a user with the name "kong".


4- Uncomment database variables in the configuration file /etc/kong/kong.conf:
database = postgres
pg_host
pg_port
pg_timeout
pg_user
pg_password
pg_database


5- Run the Kong migrations:
kong migrations bootstrap -c /etc/kong/kong.conf

----------------------------------------------------------------------------------

Install on Docker:
https://konghq.com/get-started/#install

1- Create a custom network:
docker network create kong-net


2- Download and run a dockerized PostgreSQL database:
docker run -d --name kong-database \
--network=kong-net \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
-e "POSTGRES_PASSWORD=kong" \
postgres:9.6


3- Prepare your database:
Run the migrations with an ephemeral Kong container.
docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_USER=kong" \
-e "KONG_PG_DATABASE=kong" \
-e "KONG_PG_PASSWORD=kong" \
kong:latest kong migrations bootstrap


4- Start Kong:
docker run -d --name kong \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_USER=kong" \
-e "KONG_PG_DATABASE=kong" \
-e "KONG_PG_PASSWORD=kong" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 8000:8000 \
-p 8443:8443 \
-p 127.0.0.1:8001:8001 \
-p 127.0.0.1:8444:8444 \
kong:latest


5- Test it:
curl -i http://localhost:8001/

----------------------------------------------------------------------------------

Installing Konga:

1- Prepare Konga’s database by starting an ephemeral container:
docker run --rm \
--network=kong-net \
pantsel/konga -c prepare -a postgres -u postgresql://kong@kong-database:5432/konga_db


2- Running Konga on Docker:
docker run --rm -p 1337:1337 \
--network=kong-net \
-e "DB_ADAPTER=postgres" \
-e "DB_HOST=kong-database" \
-e "DB_USER=kong" \
-e "DB_DATABASE=konga_db" \
-e "KONGA_HOOK_TIMEOUT=120000" \
-e "NODE_ENV=production" \
--name konga \
pantsel/konga

----------------------------------------------------------------------------------

+Scalability (July 13, 2020, 4:29 p.m.)

Scalability is the ability of a program to scale. For example, if you can do something on a small database (say, fewer than 1,000 records), a highly scalable program would work just as well on a large set (say, millions or billions of records).


Scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth.

+Docker Compose vs Docker Machine (July 11, 2020, 10:42 p.m.)

Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or DigitalOcean.

-------------------------------------------------------------------

Docker is the command-line tool that uses containerization to manage multiple images, containers, volumes, and such. A container is, loosely speaking, a lightweight alternative to a virtual machine (it isolates processes but shares the host kernel).

Until recently Docker didn't run on native Mac or Windows OS, so another tool was created, Docker-Machine, which creates a virtual machine (using yet another tool, e.g. Oracle VirtualBox), runs Docker on that VM, and helps coordinate between the host OS and the Docker VM.

Docker-Compose is essentially a higher-level scripting interface on top of Docker itself, making it easier to launch several containers simultaneously. Its config file (docker-compose.yml) can be confusing, since some of its settings are passed down to the lower-level docker process and some are used only at the higher level.

-------------------------------------------------------------------

Developers describe Docker Compose as "Define and run multi-container applications with Docker". With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running. On the other hand, Docker Machine is detailed as "Machine management for a container-centric world". "Machine" lets you create Docker hosts on your computer, on cloud providers, and inside your own data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

-------------------------------------------------------------------

+EC2 (July 11, 2020, 10:13 p.m.)

Amazon's Elastic Compute Cloud (EC2) is an offering that allows developers to provision and run their applications by creating instances of virtual machines in the cloud. EC2 also offers automatic scaling where resources are allocated based on the amount of traffic received.

Just like any other AWS offerings, EC2 can be easily integrated with the other Amazon services such as the Simple Queue Service (SQS), or Simple Storage Service (S3), among others.

+Can you use Kubernetes without Docker? (July 11, 2020, 9:53 p.m.)

As Kubernetes is a container orchestrator, it needs a container runtime in order to orchestrate. Kubernetes is most commonly used with Docker, but it can also be used with any container runtime. RunC, cri-o, containerd are other container runtimes that you can deploy with Kubernetes. The Cloud Native Computing Foundation (CNCF) maintains a listing of endorsed container runtimes on their ecosystem landscape page and Kubernetes documentation provides specific instructions for getting set up using ContainerD and CRI-O.

+Can you use Docker without Kubernetes? (July 11, 2020, 9:52 p.m.)

Docker is commonly used without Kubernetes, in fact this is the norm. While Kubernetes offers many benefits, it is notoriously complex and there are many scenarios where the overhead of spinning up Kubernetes is unnecessary or unwanted.

In development environments it is common to use Docker without a container orchestrator like Kubernetes. In production environments often the benefits of using a container orchestrator do not outweigh the cost of added complexity. Additionally, many public cloud services like AWS, GCP, and Azure provide some orchestration capabilities making the tradeoff of the added complexity unnecessary.

+How Does Kubernetes Relate to Docker? (July 11, 2020, 9:30 p.m.)

Kubernetes and Docker are both comprehensive de-facto solutions to intelligently manage containerized applications and provide powerful capabilities, and from this, some confusion has emerged. “Kubernetes” is now sometimes used as a shorthand for an entire container environment based on Kubernetes. In reality, they are not directly comparable, have different roots, and solve for different things.

Docker is a platform and tool for building, distributing and running Docker containers. It offers its own native clustering tool that can be used to orchestrate and schedule containers on machine clusters. Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner. It works around the concept of pods, which are scheduling units (and can contain one or more containers) in the Kubernetes ecosystem, and they are distributed among nodes to provide high availability. One can easily run a Docker build on a Kubernetes cluster, but Kubernetes itself is not a complete solution and is meant to include custom plugins.

+Kubernetes vs. Docker (July 11, 2020, 9:24 p.m.)

“Kubernetes vs. Docker” is a somewhat misleading phrase. When you break it down, these words don’t mean what many people intend them to mean, because Docker and Kubernetes aren’t direct competitors. Docker is a containerization platform, and Kubernetes is a container orchestrator for container platforms like Docker.

The technology that is actually comparable with Kubernetes is Docker Swarm, which is basically an alternative container orchestration tool.
Instead of Kubelets (the service that actually enables Docker to run in Kubernetes cluster nodes), you would have services called Docker Daemons running on each node, and instead of the Kubernetes engine you would just have Docker, which spans the multiple nodes that make up the cluster.

-----------------------------------------------------------------------------

Docker is a "container" technology, it creates an isolated environment for applications.
Kubernetes is an infrastructure for managing those containers.

-----------------------------------------------------------------------------

Docker automates building and deploying applications: CI/CD (before and when deploying)
Kubernetes automates scheduling and management of application containers (after container deployment)

-----------------------------------------------------------------------------

Docker platform is for configuring, building, and distributing containers.
Kubernetes is an ecosystem for managing a cluster of Docker containers.

-----------------------------------------------------------------------------

Docker is mainly used in the local development process, so when you're developing a sort of application you would use Docker containers for different services that your application depends on, like databases, message brokers, etc. It is also used in the CI process to build your application and package it into an isolated container environment.

Once built, that container gets stored or pushed into a private repository; so now is where Kubernetes actually comes into the game.

-----------------------------------------------------------------------------

+Kubernetes (July 11, 2020, 9:23 p.m.)

Kubernetes is becoming ever more popular as a container orchestration solution.


Kubernetes is made up of many components that do not know or care about each other. The components all talk to each other through the API server. Each component performs its own function and exposes metrics that we can collect for monitoring later on. We can break the components down into three main parts.
- The Control Plane - The Master.
- Nodes - Where pods get scheduled.
- Pods - Holds containers.

+Security - XSS/CSRF/SQL injection (July 11, 2020, 9:49 a.m.)

Cross-site scripting (XSS):

XSS attacks enable an attacker to inject client-side scripts into browsers. Django templates protect your project from the majority of XSS attacks.

---------------------------------------------------------------------------

Cross-site request forgery (CSRF):

CSRF attacks allow a malicious user to execute actions using the credentials of another user. Django has built-in protection against most types of CSRF attacks.

---------------------------------------------------------------------------

SQL injection:

SQL injection is an attack where a malicious user is able to execute arbitrary SQL code on a database. Django’s querysets are protected from SQL injection since queries are constructed using parameterization.
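
If you do drop down to raw SQL, the protection holds only as long as user input is passed as a query parameter. A minimal sketch (the Entry model and user_input variable are hypothetical):

# Safe: the database driver escapes the parameter.
Entry.objects.raw('SELECT * FROM blog_entry WHERE headline = %s', [user_input])

# Unsafe: never interpolate user input into the SQL string yourself.
# Entry.objects.raw("SELECT * FROM blog_entry WHERE headline = '%s'" % user_input)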

---------------------------------------------------------------------------

+Microservices (July 11, 2020, 9:48 a.m.)

Microservices are an application architecture style where independent, self-contained programs with a single purpose each can communicate with each other over a network. Typically, these microservices are able to be deployed independently because they have a strong separation of responsibilities via a well-defined specification with significant backward compatibility to avoid sudden dependency breakage.


Successful applications begin with a monolith-first approach using a single, shared application codebase and deployment. Only after the application proves its usefulness is it then broken down into microservice components to ease further development and deployment. This approach is called the "monolith-first" or "MonolithFirst" pattern.


Microservices should follow the principle of single responsibility. A microservice only handles a single business logic.

+Telegraf (Dec. 9, 2018, 2:42 p.m.)

https://docs.influxdata.com/telegraf/v1.9/

-----------------------------------------------------------

Introduction:

Telegraf is a plugin-driven server agent for collecting & reporting metrics, and it is the first piece of the TICK stack. Telegraf has plugins to source a variety of metrics directly from the system it's running on, pull metrics from third-party APIs, or even listen for metrics via statsd and Kafka consumer services. It also has output plugins to send metrics to a variety of other datastores, services, and message queues, including InfluxDB, Graphite, OpenTSDB, Datadog, Librato, Kafka, MQTT, NSQ, and many others.

-----------------------------------------------------------

Installation:

(Debian & Ubuntu are different! Take a look at the link below.)
https://docs.influxdata.com/telegraf/v1.9/introduction/installation/

1- curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -

2- source /etc/lsb-release

3- echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

4- apt-get update && sudo apt-get install telegraf

5- service telegraf start

-----------------------------------------------------------

Configuration:

Create a configuration file with default input and output plugins:

telegraf config > telegraf.conf

-----------------------------------------------------------

+Let's Encrypt (May 26, 2018, 10:21 a.m.)

For selecting the operating system & hosting software, refer to the following link:

https://certbot.eff.org/

-----------------------------------------------------------------

Nginx:

1- apt install python-certbot-nginx


2- Add the following lines to your project nginx config:
location /.well-known {
alias /srv/me/.well-known;
}


3- /etc/init.d/nginx restart


4- certbot --authenticator webroot --installer nginx
When asked for "webroot", add "/srv/<your_project>/".

-----------------------------------------------------------------

I had to repeat step 4 three times to finally get it to work! Each time it raised errors like:

Domain: pwa.tiptong.ir
Type: unauthorized
Detail: Invalid response from....
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.

I added "listen 443 ssl;" to tiptong.conf in nginx config files. I think it solved the above problem.

We were unable to install your certificate, however, we successfully restored your server to its prior configuration.

Running the same step 4 command fixed the above error. Weird!

-----------------------------------------------------------------

+NSQ (Sept. 12, 2017, 9:45 a.m.)

Installation:

http://nsq.io/deployment/installing.html

1- Download and extract:
https://s3.amazonaws.com/bitly-downloads/nsq/nsq-1.0.0-compat.linux-amd64.go1.8.tar.gz

2- Copy:
cp nsq-1.0.0-compat.linux-amd64.go1.8/bin/* /usr/local/bin/

--------------------------------------------------------------------------------------

Quick Start:

1- In one shell, start nsqlookupd:
$ nsqlookupd

2- In another shell, start nsqd:
$ nsqd --lookupd-tcp-address=127.0.0.1:4160

3- In another shell, start nsqadmin:
$ nsqadmin --lookupd-http-address=127.0.0.1:4161

4- Publish an initial message (creates the topic in the cluster, too):
$ curl -d 'hello world 1' 'http://127.0.0.1:4151/pub?topic=test'

5- Finally, in another shell, start nsq_to_file:
$ nsq_to_file --topic=test --output-dir=/tmp --lookupd-http-address=127.0.0.1:4161

6- Publish more messages to nsqd:
$ curl -d 'hello world 2' 'http://127.0.0.1:4151/pub?topic=test'
$ curl -d 'hello world 3' 'http://127.0.0.1:4151/pub?topic=test'

7- To verify things worked as expected, in a web browser open http://127.0.0.1:4171/ to view the nsqadmin UI and see statistics. Also, check the contents of the log files (test.*.log) written to /tmp.

The important lesson here is that nsq_to_file (the client) is not explicitly told where the test topic is produced, it retrieves this information from nsqlookupd and, despite the timing of the connection, no messages are lost.

--------------------------------------------------------------------------------------

Clustering NSQ:

nsqlookupd

nsqd --lookupd-tcp-address=10.10.0.101:4160,10.10.0.102:4160,10.10.0.103:4160

nsqadmin --lookupd-http-address=10.10.0.101:4161,10.10.0.102:4161,10.10.0.103:4161

--------------------------------------------------------------------------------------

+SNMP (May 1, 2017, 3:51 p.m.)

1- apt-get install snmp snmpd

2- /etc/snmp/snmpd.conf
Edit to:
agentAddress udp:0.0.0.0:161
view systemonly included .1

Add to the bottom:
com2sec readonly 10.10.0.198 public
com2sec readonly 10.10.0.199 public
com2sec readonly localhost public

3- /etc/init.d/snmpd restart

-------------------------------------------------------------------------

For checking if snmpd is running, and on what ip/port it's listening to, you can use:

netstat -apn | grep snmpd

-------------------------------------------------------------------------

Test the Configuration with an SNMP Walk:

snmpwalk -v1 -c public localhost
snmpwalk -v1 -c public 10.10.0.192

-------------------------------------------------------------------------

For getting information based on OID:

snmpwalk -v1 -c public localhost iso.3.6.1.2.1.1.1

The OID Tree:
http://www.oidview.com/mibs/712/LANART-AGENT.html

-------------------------------------------------------------------------

+Integrated Lights-Out (iLO) (Feb. 15, 2017, 5:35 p.m.)

Integrated Lights-Out (iLO) is a remote server management processor embedded on the system boards of HP ProLiant and Blade servers that allows controlling and monitoring of HP servers from a remote location. HP iLO management is a powerful tool that provides multiple ways to configure, update, monitor, and run servers remotely.

The embedded iLO management card has its own network connection and IP address to which server administrators can connect via Domain Name System (DNS)/Dynamic Host Configuration Protocol (DHCP) or through a separate dedicated management network. iLO provides a remote Web-based console, which can be used to administer the server remotely. The iLO port is an Ethernet port, which can be enabled through the ROM-Based Setup Utility (RBSU).

+Auto start script at boot time (Aug. 22, 2014, 11:39 a.m.)

To make a script run when the server starts and stops:

First make the script executable with this command:
sudo chmod 755 <path to the script>


Then:
sudo /usr/sbin/update-rc.d -f <path to the script> defaults

Django
+Benefits of uWSGI (Sept. 18, 2020, 9:14 p.m.)

- It's almost entirely configurable through environment variables (which fits well with containers)

- It includes native HTTP support, which can circumvent the need for a separate HTTP server like Apache or Nginx.

+Generate Secret Key (Aug. 26, 2020, 10:50 a.m.)

from django.core.management.utils import get_random_secret_key

print(get_random_secret_key())

+Translation (Aug. 11, 2020, 10:32 a.m.)

Contextual markers:

from django.utils.translation import pgettext

month = pgettext("month name", "May")

or:

from django.db import models
from django.utils.translation import pgettext_lazy

class MyThing(models.Model):
name = models.CharField(help_text=pgettext_lazy(
'help text for MyThing model', 'This is the help text'))

------------------------------------------------------------------------------------------

from django.utils.translation import gettext_lazy as _

from django.utils.text import format_lazy


success_message = format_lazy(
_('The {item} was updated successfully.'), item=_('bank')
)

------------------------------------------------------------------------------------------

+The paths in settings.py (Aug. 10, 2020, 4 p.m.)

Django >= 3.1


import sys
from pathlib import Path


BASE_DIR = Path(__file__).resolve(strict=True).parent.parent
sys.path.append(str(BASE_DIR / 'apps'))

LOCALE_PATHS = list(Path.cwd().rglob('locale'))

MEDIA_ROOT = BASE_DIR / 'tiptong' / 'media'
MEDIA_URL = '/media/'

STATIC_ROOT = BASE_DIR / 'tiptong' / 'static'
STATIC_URL = '/static/'

DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": str(BASE_DIR / "db.sqlite3"),
}
}




Example usage in views.py:
from django.conf import settings
json_path = settings.STATIC_ROOT / 'my_app/json/data.json'

----------------------------------------------------------------------------

Django <= 3.0


import os
import sys
import glob


BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

sys.path.insert(0, os.path.join(BASE_DIR, 'apps'))

LOCALE_PATHS = [
*glob.glob(BASE_DIR + '/*/*/locale', recursive=False)
]

MEDIA_ROOT = os.path.join(BASE_DIR, 'tiptong', 'media')
MEDIA_URL = '/media/'

STATIC_ROOT = os.path.join(BASE_DIR, 'tiptong', 'static')
STATIC_URL = '/static/'


DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": os.path.join(BASE_DIR, 'db.sqlite3'),
}
}

----------------------------------------------------------------------------

+Distinct on old database backends (Aug. 7, 2020, 10:47 p.m.)

agent_numbers = polls.order_by('agent_number').values_list('agent_number', flat=True).distinct()

+OPTIONS vs HEAD in API (July 28, 2020, 10:06 p.m.)

The OPTIONS method returns info about the API (supported methods/content type).

The HEAD method returns info about a resource (version/length/type).

Example server responses:

OPTIONS

HTTP/1.1 200 OK
Allow: GET,HEAD,POST,OPTIONS,TRACE
Content-Type: text/html; charset=UTF-8
Date: Wed, 08 May 2013 10:24:43 GMT
Content-Length: 0


HEAD

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Type: text/html; charset=UTF-8
Date: Wed, 08 May 2013 10:12:29 GMT
ETag: "780602-4f6-4db31b2978ec0"
Last-Modified: Thu, 25 Apr 2013 16:13:23 GMT
Content-Length: 1270
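
To observe these responses yourself, a quick sketch with the requests library (the URL is a placeholder):

import requests

# OPTIONS: ask the server which methods the resource supports.
resp = requests.options('https://example.com/api/books/')
print(resp.headers.get('Allow'))

# HEAD: fetch only the headers (metadata) of the resource, with no body.
resp = requests.head('https://example.com/api/books/')
print(resp.headers.get('Content-Length'), resp.headers.get('Content-Type'))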

+iterator (July 26, 2020, 3:34 p.m.)

Returns an iterator over the query results.

A QuerySet typically caches its results internally so that repeated evaluations do not result in additional queries. In contrast, iterator() will read results directly, without doing any caching at the QuerySet level (internally, the default iterator calls iterator() and caches the return value).

For a QuerySet which returns a large number of objects that you only need to access once, this can result in better performance and a significant reduction in memory.

Note that using iterator() on a QuerySet which has already been evaluated will force it to evaluate again, repeating the query.
Also, use of iterator() causes previous prefetch_related() calls to be ignored since these two optimizations do not make sense together.
Depending on the database backend, query results will either be loaded all at once or streamed from the database using server-side cursors.
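
A minimal usage sketch (the Entry model and the process() helper are hypothetical):

# Stream a large queryset without caching all rows in memory.
for entry in Entry.objects.all().iterator(chunk_size=2000):
    process(entry)  # hypothetical per-object work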

+Hash passwords (July 16, 2020, 12:54 p.m.)

For bulk create you might need to set passwords like this:

from django.contrib.auth.hashers import make_password

password=make_password('A Password!')
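
For example, with bulk_create(), which bypasses save() and set_password(), something like this (the usernames are placeholders):

from django.contrib.auth.hashers import make_password
from django.contrib.auth.models import User

# bulk_create() does not call save(), so hash the passwords up front.
hashed = make_password('A Password!')
User.objects.bulk_create([
    User(username='alice', password=hashed),
    User(username='bob', password=hashed),
])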

+Running Tests (July 15, 2020, 10:51 p.m.)

python manage.py test api.v1.books.tests.books

+Measuring elapsed time (June 10, 2020, 10:46 a.m.)

from datetime import datetime

from django.utils.translation import gettext_lazy as _
from django.utils.timezone import make_aware, get_current_timezone


def get_elapsed_time(from_date_time_obj, short_version: bool) -> str:
now = make_aware(datetime.now(), get_current_timezone())
diff_time = now - from_date_time_obj
elapsed_days, elapsed_seconds = diff_time.days, diff_time.seconds

hours = int(elapsed_seconds / 3600)
minutes = int(elapsed_seconds % 3600 / 60)
seconds = int((elapsed_seconds % 3600) % 60)

# Short version
if short_version:

if elapsed_days:
return_value = '{:d} {}'.format(elapsed_days, _('days'))
elif hours:
return_value = '{:d} {}'.format(hours, _('hours'))
elif minutes:
return_value = '{:d} {}'.format(minutes, _('minutes'))
elif seconds:
return_value = '{:d} {}'.format(seconds, _('seconds'))
else:
return_value = ''

return '{} {}'.format(return_value, _('earlier'))

# Long version
# Seconds (We always have some "seconds" to display to the user)
duration_string = '{:02d} {} '.format(seconds, _('seconds'))

# Minutes
if minutes:
duration_string = '{:02d} {} {} '.format(
minutes,
_('minutes'),
_('and')
) + duration_string

# Hours
if hours:
duration_string = '{:02d} {} {} '.format(
hours,
_('hours'),
_('and')
) + duration_string

# Days
if elapsed_days:
duration_string = '{:02d} {} {} '.format(
elapsed_days,
_('days'),
_('and')
) + duration_string

return '{} {}'.format(duration_string, _('earlier'))

+Send Errors to Email (June 9, 2020, 10:07 a.m.)

1- settings.py:

ERRORS_SUBJECT = 'TipTong API'
ERRORS_FROM_EMAIL = 'devops@tiptong.ir'
ERRORS_RECIPIENTS = ['mohsen@mohsenhassani.com']



2-
from django.core.mail import send_mail
from django.conf import settings


try:
something_risky_code()
except Exception as e:
send_mail(
settings.ERRORS_SUBJECT,
str(e),
settings.ERRORS_FROM_EMAIL,
settings.ERRORS_RECIPIENTS
)

+Models - FileField upload_to (June 8, 2020, 11:37 p.m.)

import uuid


def upload_to(instance, filename):
return 'announcements/{}.wav'.format(uuid.uuid4())


class Announcement(models.Model):
voice = models.FileField(_('voice'), upload_to=upload_to)

+Queryset - Filter on ManyToMany count (May 17, 2020, 12:43 p.m.)

questions = Question.objects.annotate(num_answers=Count('answers')) \
.filter(num_answers__gt=4,
deleted=False,
image__gt=''
).order_by('translated_at')

+CBV - CheckboxSelectMultiple (May 16, 2020, 2:31 p.m.)

from django.forms.models import modelform_factory


class ModelFormWidgetMixin:
def get_form_class(self):
return modelform_factory(
self.model,
fields=self.fields,
widgets=self.widgets
)

-----------------------------------------------------------------------

class ProductCreate(ModelFormWidgetMixin, CreateView):
model = Product
fields = [
'name', 'image', 'units', 'default_unit', 'extra_info', 'is_enabled'
]
template_name = 'manager/occupations/products-create.html'
widgets = {
'units': forms.CheckboxSelectMultiple,
'extra_info': forms.CheckboxSelectMultiple
}

-----------------------------------------------------------------------

+Template - Get form field ID (May 16, 2020, 11:55 a.m.)

{{ field.auto_id }}


{{ field.id_for_label }}


{{ field.html_name }}

+Signals (May 16, 2020, 10:11 a.m.)

from django.db.models.signals import post_save
from django.dispatch import receiver


class Occupation(models.Model):
pass


@receiver(post_save, sender=Occupation)
def create_tag(sender, instance, created, **kwargs):
"""Create a tag for new occupation. Update the name if already created."""
if created:
Tag.objects.create(
name=instance.name,
content_type=ContentType.objects.get_for_model(instance),
object_id=instance.pk
)
else:
Tag.objects.filter(
content_type=ContentType.objects.get_for_model(instance),
object_id=instance.pk
).update(
name=instance.name
)

+GenericForeignKey (May 13, 2020, 5:40 p.m.)

from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType


class Tag(models.Model):
content_type = models.ForeignKey(
ContentType,
on_delete=models.SET_NULL,
blank=True,
null=True
)
object_id = models.CharField(max_length=50, blank=True, null=True)
content_object = GenericForeignKey('content_type', 'object_id')

------------------------------------------------------------------------------

from django.contrib.contenttypes.models import ContentType


Tag.objects.create(
name=instance.name,
content_type=ContentType.objects.get_for_model(instance),
object_id=instance.pk
)

------------------------------------------------------------------------------

Tag.objects.get(
content_type=ContentType.objects.get_for_model(instance),
object_id=instance.pk
).delete()

------------------------------------------------------------------------------

+Class-Based Views (CBV) (May 11, 2020, 12:24 p.m.)

https://ccbv.co.uk/projects/Django/3.0/django.views.generic.base/View/

+CBV - SuccessMessageMixin (May 11, 2020, 12:10 p.m.)

from django.contrib.messages.views import SuccessMessageMixin
from django.utils.text import format_lazy
from django.utils.translation import gettext_lazy as _


class CategoryCreateView(SuccessMessageMixin, CreateView):
model = Category
fields = ['name']
success_message = format_lazy(
_('The {item} was created successfully.'), item=_('category')
)

+CBV - Set asterisk for required fields (May 11, 2020, 12:09 p.m.)

class CategoryCreateView(CreateView, ListView):

def get_form(self, form_class=None):
form = super().get_form(form_class)
form.required_css_class = 'required'
return form

+CBV - Delete old image on form save (May 10, 2020, 5:33 p.m.)

def form_valid(self, form):
data = form.cleaned_data

config = Config.objects.get(pk=1)
if config.occupations_default_image != data['occupations_default_image']:
config.occupations_default_image.delete()
if config.products_default_image != data['products_default_image']:
config.products_default_image.delete()

return super().form_valid(form)

+DRF - Save a list of object PKs (May 9, 2020, 11:36 a.m.)

class TimetableWriteSerializer(serializers.ModelSerializer):
timetables = serializers.PrimaryKeyRelatedField(
many=True,
queryset=Timetable.objects.filter(enable=True)
)

class Meta:
model = ExpertProfile
fields = ['timetables']

+Run django code in python file (May 9, 2020, 9:39 a.m.)

Add these lines at the top of the file (replace 'mysite.settings' with your own settings module):

import os

import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
django.setup()

+Models - limit_choices_to (May 8, 2020, 7:53 p.m.)

report_permissions = models.ManyToManyField(
Permission,
verbose_name=_('report permissions'),
limit_choices_to={'category': '1'}
)

------------------------------------------------------------------

staff_member = models.ForeignKey(
User,
on_delete=models.CASCADE,
limit_choices_to={'is_staff': True},
)

------------------------------------------------------------------

from django.db.models import Q


limit_choices_to=Q(share_holder=True) | Q(distributor=True)

------------------------------------------------------------------

product = models.ForeignKey(
    Product,
    on_delete=models.CASCADE,
    limit_choices_to={
        'id__in': BaseModel._product_list,
    },
)

------------------------------------------------------------------

+CBV - Simple View (May 4, 2020, 4:38 p.m.)

class RestoreQuestion(View):
    def get(self, request, *args, **kwargs):
        pk = self.kwargs.get('pk')

        if pk:
            Question.objects.filter(id=pk).update(deleted=False)
            return HttpResponseRedirect(reverse_lazy('deletions:home'))
        else:
            return HttpResponseRedirect(reverse_lazy('home'))

+CBV - Pass Params to Form (May 4, 2020, 11:58 a.m.)

View:


class QuestionEditView(SuccessMessageMixin, UpdateView):
    model = Question
    template_name = 'expert_verified/edit.html'
    success_message = format_lazy(
        _('The {item} was updated successfully.'), item=_('question')
    )
    form_class = QuestionForm
    context_object_name = 'question'

    def get_form_kwargs(self):
        kwargs = super().get_form_kwargs()
        kwargs.update({'request': self.request})
        return kwargs





Form:

class QuestionForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        self.request = kwargs.pop('request')
        super().__init__(*args, **kwargs)
        print(self.request.user)

+Makemessages exclude (April 11, 2020, 5:01 p.m.)

django-admin makemessages -i apps/ -l fa

(-i / --ignore excludes the given pattern, here everything under apps/, from message extraction.)

+Thread in TemplateView (April 8, 2020, 10:59 a.m.)

import threading


class QuestionsImportView(TemplateView):
    template_name = 'questions/index.html'

    def get(self, request, *args, **kwargs):
        threading.Thread(target=import_questions).start()
        context = self.get_context_data(**kwargs)
        return self.render_to_response(context)

+DRF - get_extra_kwargs method (March 3, 2020, 3:51 p.m.)

def get_extra_kwargs(self):
    # Set "document" to non-required for the partial-update action.
    extra_kwargs = {'city': {'required': True}}
    action = self.context['view'].action
    if action == 'partial_update':
        extra_kwargs['document'] = {'required': False}
    return extra_kwargs

+DRF - Viewsets (March 1, 2020, 3:23 p.m.)

The ViewSet class inherits from APIView. You can use any of the standard attributes such as permission_classes, authentication_classes in order to control the API policy on the viewset.


A ViewSet class is simply a type of class-based View that does not provide any method handlers such as .get() or .post(), and instead provides actions such as .list() and .create().


class UserViewSet(viewsets.ViewSet):
    """
    A simple ViewSet for listing or retrieving users.
    """
    def list(self, request):
        queryset = User.objects.all()
        serializer = UserSerializer(queryset, many=True)
        return Response(serializer.data)

    def retrieve(self, request, pk=None):
        queryset = User.objects.all()
        user = get_object_or_404(queryset, pk=pk)
        serializer = UserSerializer(user)
        return Response(serializer.data)
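ViewSets are normally wired up with a router instead of explicit URL patterns. A minimal sketch using DRF's DefaultRouter (assuming the UserViewSet above):

from rest_framework.routers import DefaultRouter

router = DefaultRouter()
router.register(r'users', UserViewSet, basename='user')

urlpatterns = router.urls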

-----------------------------------------------------------------------------------

ViewSet actions:



def list(self, request):
    pass

def create(self, request):
    pass

def retrieve(self, request, pk=None):
    pass

def update(self, request, pk=None):
    pass

def partial_update(self, request, pk=None):
    pass

def destroy(self, request, pk=None):
    pass

-----------------------------------------------------------------------------------

The ViewSet class does not provide any implementations of actions. In order to use a ViewSet class you'll override the class and define the action implementations explicitly.

-----------------------------------------------------------------------------------

GenericViewSet


The GenericViewSet class inherits from GenericAPIView, and provides the default set of get_object, get_queryset methods and other generic view base behavior, but does not include any actions by default.

In order to use a GenericViewSet class you'll override the class and either mixin the required mixin classes, or define the action implementations explicitly.
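For example, a minimal sketch of a list/retrieve-only viewset built from mixins (UserSerializer is assumed from the ViewSet example above):

from rest_framework import mixins, viewsets


class UserViewSet(mixins.ListModelMixin,
                  mixins.RetrieveModelMixin,
                  viewsets.GenericViewSet):
    queryset = User.objects.all()
    serializer_class = UserSerializer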

-----------------------------------------------------------------------------------

ModelViewSet

The ModelViewSet class inherits from GenericAPIView and includes implementations for various actions, by mixing in the behavior of the various mixin classes.

The actions provided by the ModelViewSet class are .list(), .retrieve(), .create(), .update(), .partial_update(), and .destroy().
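So a full CRUD API usually needs nothing more than this minimal sketch (UserSerializer assumed as above):

class UserViewSet(viewsets.ModelViewSet):
    queryset = User.objects.all()
    serializer_class = UserSerializer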

-----------------------------------------------------------------------------------

ReadOnlyModelViewSet

The ReadOnlyModelViewSet class also inherits from GenericAPIView. As with ModelViewSet it also includes implementations for various actions, but unlike ModelViewSet only provides the 'read-only' actions, .list() and .retrieve().

-----------------------------------------------------------------------------------

+Faker & Factoryboy (Feb. 24, 2020, 11:53 a.m.)

https://faker.readthedocs.io/en/latest/providers/faker.providers.lorem.html

https://factoryboy.readthedocs.io/en/latest/introduction.html

https://factoryboy.readthedocs.io/en/latest/recipes.html

https://faker.readthedocs.io/en/latest/providers.html

https://factoryboy.readthedocs.io/en/latest/reference.html#simple-parameters

----------------------------------------------------------------------

from accounts.factories import AccountFactory

AccountFactory()

AccountFactory.reset_sequence()

AccountFactory.reset_sequence(10)
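A minimal sketch of such a factory (the Account model and its username/name fields are just assumptions for illustration):

import factory

from accounts.models import Account


class AccountFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Account

    username = factory.Sequence(lambda n: 'user%d' % n)  # user0, user1, ...
    name = factory.Faker('name')  # random fake full name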

----------------------------------------------------------------------

+Create Test Database Permission Denied (Jan. 22, 2020, 5:08 p.m.)

When running "python manage.py test" command, you might get the error:

"Got an error creating the test database: permission denied to create database"


To solve the problem, you need to grant the CREATEDB permission to your project's database user:

sudo su
su postgres
psql
ALTER USER mohsen CREATEDB;

"mohsen" is your project database username.

+Test - RequestFactory vs Client (Jan. 22, 2020, 1:01 p.m.)

The Django test client is used when you want to test a full HTTP request, while RequestFactory is used when you want to test views by calling them directly inside Django.

-----------------------------------------------------------------------

Typically just use the test client. That'll ensure that you're testing your project more completely, as the full request-response cycle is under test, including routing and middleware.

-----------------------------------------------------------------------

Use RequestFactory if you want to write unit tests that require a request instance, or if you have some good reason to want to test just the view itself.

-----------------------------------------------------------------------

RequestFactory will be much quicker, which is important when you have a lot of tests. It is also only testing the part you want to test, which is a better unit test - you will more quickly know where the error is. You should still have some tests with TestCase to test the full HTTP request.

-----------------------------------------------------------------------

Testing a GET request

Before now, you may well have used the Django test client to test views. That is fine for higher-level tests, but if you want to test a view in isolation, it’s no use because it emulates a real web server and all of the middleware and authentication, which we want to keep out of the way. Instead, we need to use RequestFactory:

from django.test import RequestFactory

RequestFactory actually implements a subset of the functionality of the Django test client, so while it will feel somewhat familiar, it won’t have all the same functionality. For instance, it doesn’t support middleware, so rather than logging in using the test client’s login() method, you instead attach a user directly to the request, as in this example:

factory = RequestFactory()
request = factory.get('/some-url/')
request.user = user

-----------------------------------------------------------------------

RequestFactory returns a request, while Client returns a response.

The RequestFactory does what it says - it's a factory to create request objects. Nothing more, nothing less.


The Client is used to fake a complete request-response cycle. It will create a request object, which it then passes through a WSGI handler. This handler resolves the URL, calls the appropriate middleware and runs the view. It then returns the response object. It has the added benefit that it gathers a lot of extra data on the response object that is extremely useful for testing.


The RequestFactory doesn't actually touch any of your code, but the request object can be used to test parts of your code that require a valid request. The Client runs your views, so in order to test your views, you need to use the Client and inspect the response.
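Putting it together, a minimal sketch of a view unit test with RequestFactory (my_view and the URL are placeholders):

from django.contrib.auth.models import User
from django.test import RequestFactory, TestCase

from .views import my_view  # placeholder


class MyViewTest(TestCase):
    def setUp(self):
        self.factory = RequestFactory()
        self.user = User.objects.create_user('mohsen', password='secret')

    def test_my_view(self):
        request = self.factory.get('/some-url/')
        # No middleware runs, so attach the user manually:
        request.user = self.user
        response = my_view(request)
        self.assertEqual(response.status_code, 200)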

-----------------------------------------------------------------------

+Files and Directories (Jan. 3, 2020, 3:42 p.m.)

os.makedirs(
    settings.VOICEMAIL_ROOT +
    '/deleted/request/{}'.format(voicemail_record.voicemail_id),
    exist_ok=True)

os.makedirs(
    settings.VOICEMAIL_ROOT +
    '/deleted/response/{}'.format(voicemail_record.voicemail_id),
    exist_ok=True)

---------------------------------------------------------------

shutil.move(voicemail_record.recording_path.path,
            '{}/deleted/request/{}/'.format(
                settings.VOICEMAIL_ROOT,
                voicemail_record.voicemail_id)
            )

if voicemail_record.reply_recording_path:
    shutil.move(voicemail_record.reply_recording_path.path,
                '{}/deleted/response/{}/'.format(
                    settings.VOICEMAIL_ROOT,
                    voicemail_record.voicemail_id)
                )

---------------------------------------------------------------

+Excel Files (Dec. 24, 2019, 8:57 p.m.)

import xlrd


if request.POST and request.FILES:
    excel_file = request.FILES['excel_file'].read()
    book = xlrd.open_workbook(file_contents=excel_file)
    sheet = book.sheet_by_index(0)
    for row_num in range(sheet.nrows):
        print(sheet.row_values(row_num)[1])  # the second cell of each row

+DRF - to_representation vs to_internal_value (Dec. 23, 2019, 1:23 p.m.)

The to_representation() method is called to convert the initial datatype into a primitive, serializable datatype.


The to_internal_value() method is called to restore a primitive datatype into its internal python representation. This method should raise a "serializers.ValidationError" if the data is invalid.

------------------------------------------------------------------------------------------

def to_internal_value(self, data):
    data_ = data.copy()

    # Get registrar type
    if data.get('registrar_type'):
        if data['registrar_type'] == 'student':
            data_['registrar_type'] = '1'
        elif data['registrar_type'] == 'teacher':
            data_['registrar_type'] = '2'

    return super().to_internal_value(data_)

------------------------------------------------------------------------------------------

+CBV - Raise form error in form_valid() (Dec. 18, 2019, 4:07 p.m.)

def form_valid(self, form):
    data = form.cleaned_data
    if data['default_unit'].pk not in data['units'].values_list('pk', flat=True):
        form.add_error('default_unit', _('The selected item does not exist in the selected units.'))
        return self.form_invalid(form)
    return super().form_valid(form)

+CBV - Change form widget in generic CreateView/UpdateView (Dec. 18, 2019, 11:56 a.m.)

from django.forms.models import modelform_factory


class ModelFormWidgetMixin:
    def get_form_class(self):
        return modelform_factory(self.model,
                                 fields=self.fields,
                                 widgets=self.widgets)


----------------------------------------------------------------------

from django import forms


class ProductCreate(ModelFormWidgetMixin, CreateView):

    widgets = {
        'units': forms.CheckboxSelectMultiple
    }


----------------------------------------------------------------------

+Queries - Use regex (Dec. 13, 2019, 12:26 a.m.)

The records whose caller_id field has only 3 digits:

Calls.objects.filter(caller_id__regex=r'^[0-9]{3}$')

-------------------------------------------------------------

+URLs - include (Dec. 8, 2019, 10:54 a.m.)

include(module, namespace=None)
include(pattern_list)
include((pattern_list, app_namespace), namespace=None)


application namespace:
This describes the name of the application that is being deployed. Every instance of a single application will have the same application namespace. For example, Django’s admin application has a somewhat predictable application namespace of 'admin'.

instance namespace:
This identifies a specific instance of an application. Instance namespaces should be unique across your entire project. However, an instance namespace can be the same as the application namespace. This is used to specify a default instance of an application. For example, the default Django admin instance has an instance namespace of 'admin'.
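For example, a minimal sketch (assuming a polls app that sets app_name = 'polls' in polls/urls.py):

from django.urls import include, path

urlpatterns = [
    # The application namespace comes from app_name in polls/urls.py;
    # the namespace argument here sets the instance namespace.
    path('polls/', include('polls.urls', namespace='polls')),
]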

+Create uploadpath based on the current date (Nov. 30, 2019, 12:30 p.m.)

import os

from django.utils.timezone import now


def get_upload_path(instance, filename):
    return os.path.join('account/avatars/', now().date().strftime("%Y/%m/%d"), filename)


class User(AbstractUser):
    avatar = models.ImageField(blank=True, upload_to=get_upload_path)

+Modeling Polymorphism (Nov. 13, 2019, 1:48 p.m.)

Polymorphism is the ability of an object to take on many forms. Common examples of polymorphic objects include event streams, different types of users, and products in an e-commerce website. A polymorphic model is used when a single entity requires different functionality or information.
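One common minimal sketch is an abstract base model with concrete subclasses (all names here are illustrative):

from django.db import models


class Event(models.Model):
    timestamp = models.DateTimeField(auto_now_add=True)

    class Meta:
        abstract = True  # no table for Event itself


class ClickEvent(Event):
    url = models.URLField()


class SignupEvent(Event):
    email = models.EmailField()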

------------------------------------------------------------------

+ugettext_lazy, ugettext, ugettext_noop (Nov. 10, 2019, 1:27 p.m.)

ugettext_lazy holds a reference to the translation string instead of the actual translated text, so the translation occurs when the value is accessed rather than when the function is called.


When to use ugettext() or ugettext_lazy():

ugettext_lazy():
- models.py (fields, verbose_name, help_text, methods short_description);
- forms.py (labels, help_text, empty_label);
- apps.py (verbose_name).

ugettext():
- views.py
- Other modules similar to view functions that are executed during the request process

------------------------------------------------------------------

ugettext: The function returns the translation for the currently selected language.

ugettext_lazy: The function marks the string as translation string, but only fetches the translated string when it is used in a string context, such as when rendering a template.

ugettext_noop: This function only marks a string as a translation string, it does not have any other effect; that is, it always returns the string itself. The string is later translated from a variable. Use this if you have constant strings that should be stored in the source language because they are exchanged over systems or users – such as strings in a database – but should be translated at the last possible point in time, such as when the string is presented to the user.

------------------------------------------------------------------

ugettext_noop example:


import logging
from django.http import HttpResponse
from django.utils.translation import ugettext as _, ugettext_noop as _noop

def view(request):
    msg = _noop("An error has occurred")
    logging.error(msg)
    return HttpResponse(_(msg))

------------------------------------------------------------------

+Managers (Aug. 15, 2019, 11:06 a.m.)

class MyManager(models.Manager):
    def get_queryset(self):
        return super().get_queryset().filter(last_data__startswith='SIP/Mohsen')


class MyModel(models.Model):
    ...

    objects = models.Manager()
    my_objects = MyManager()

+FloatField vs DecimalField (July 31, 2019, 2:58 a.m.)

Always use DecimalField for money. Even simple operations (addition, subtraction) are not immune to float rounding issues.
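A quick illustration of why (plain Python, no Django needed):

from decimal import Decimal

print(0.1 + 0.2)  # 0.30000000000000004 - binary floats cannot represent 0.1 exactly
print(Decimal('0.1') + Decimal('0.2'))  # 0.3 - exact decimal arithmetic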

-------------------------------------------------------------

DecimalField:

- DecimalFields must define a 'decimal_places' and a 'max_digits' attribute.

- You get two free form validations included here from the above required attributes, i.e. If you set max_digits to 4, and you type in a decimal that is 4.00000 (5 digits), you will get this error: Ensure that there are no more than 4 digits in total.

- You also get a similar form validation done for decimal places (which in most browsers will also validate on the front end using the step attribute on the input field). If you set decimal_places = 1 and type in 0.001 as the value, you will get an error that the minimum value has to be 0.1.

- With a Decimal type, rounding is also handled for you due to the required attributes that need to be set.

- In the database (postgresql), the DecimalField is saved as a numeric(max_digits, decimal_places) type, and storage is set as "main".

-------------------------------------------------------------

FloatField:

- No smart rounding; float arithmetic can actually result in rounding issues, as illustrated above.

- Does not have the extra form validation that you get from DecimalField

- In the database (postgresql), the FloatField is saved as a "double precision" Type, and Storage is set as "plain"

-------------------------------------------------------------

+Aggregation vs Annotation (July 22, 2019, 12:18 p.m.)

Aggregate calculates values for the entire queryset.
Aggregate generates result (summary) values over an entire QuerySet: it operates over the whole rowset to produce a single value, for example the sum of all prices.

Book.objects.aggregate(average_price=Avg('price'))
Returns a dictionary containing the average price of all books in the queryset.

-----------------------------------------------------------------------------

Annotate calculates summary values for each item in the queryset.
Annotate generates an independent summary for each object in a QuerySet (it iterates over each object in the QuerySet and applies the operation).

Annotation
>>> q = Book.objects.annotate(num_authors=Count('authors'))
>>> q[0].num_authors
2
>>> q[1].num_authors
1
q is the queryset of books, but each book has been annotated with the number of authors.

Annotation
videos = Video.objects.values('id', 'name', 'video').annotate(
    Count('user_likes', distinct=True))

+m2m (July 11, 2019, 9:07 p.m.)

For a ModelForm, just do:
form.save()


If you had to use commit=False in form.save(), then you have to save the m2m manually:
if form.is_valid():
    project = form.save(commit=False)
    # Do something extra with "project" ...
    project.save()
    form.save_m2m()

---------------------------------------------------------

if form.fields.get('units'):
    new_category.units.set(data['units'])

---------------------------------------------------------

+Style Admin Interface in admin.py (April 18, 2018, 7:39 a.m.)

class NoteAdmin(admin.ModelAdmin):
    search_fields = ('title', 'note')
    list_filter = ('category',)

    class Media:
        css = {
            'all': ('admin/css/interface.css',)
        }

-------------------------------------------------------------

The path to "interface.css" is:
Projects/notes/notes/static/admin/css/interface.css

-------------------------------------------------------------

And finally, I couldn't make "nginx" recognize this file. To solve the problem I had to comment out the "location /static/admin/" block in the nginx config, and run "collectstatic" in my project to gather all the admin static files together.

-------------------------------------------------------------

+Ajax (April 22, 2018, 7:08 p.m.)

$.ajax({
    type: 'POST',
    url: $(this).attr('href'),
    data: {
        csrfmiddlewaretoken: '{{ csrf_token }}',
    },
    dataType: 'json',
    success: function (data) {

    },
    error: function () {

    }
});

----------------------------------------------------------------------------

Examples of serializing data in views and return a response:


from django.core.serializers import serialize


def get_cities(request):
    if request.is_ajax():
        cities = City.objects.filter(province=request.POST['province_id'])
        return HttpResponse(serialize('json', cities, fields=('pk', 'name')))

----------------------------------------------------------------------------

def delete_order(request, p_type, pid):
    if request.is_ajax():
        return JsonResponse({'orders_length': len(request.session['orders']),
                             'total_price': request.session['orders_total_price'],
                             'status': 'deleted'})

----------------------------------------------------------------------------

return HttpResponse('rejected', content_type='text/plain')

----------------------------------------------------------------------------

foos = Foo.objects.all()
data = serializers.serialize('json', foos)
return HttpResponse(data, content_type='application/json')  # the mimetype kwarg was removed in Django 1.7

----------------------------------------------------------------------------

import json

def json_response(something):
    return HttpResponse(json.dumps(something), content_type='application/javascript; charset=UTF-8')

----------------------------------------------------------------------------

from django.core.serializers.json import DjangoJSONEncoder

def categories_view(request):
    categories = Category.objects.annotate(notes_count=Count('notes__pk')).values('pk', 'name', 'notes_count')
    data = json.dumps(list(categories), cls=DjangoJSONEncoder)
    return HttpResponse(data, content_type='application/json')

----------------------------------------------------------------------------

For Django 1.7+

from django.http import JsonResponse
return JsonResponse({'foo': 'bar'})

Serializing non-dictionary objects
In order to serialize objects other than dict you must set the safe parameter to False:

return JsonResponse([1, 2, 3], safe=False)
Without passing safe=False, a TypeError will be raised.

----------------------------------------------------------------------------

If you need to serialize some fields of an object, you can not use this:
return JsonResponse({'products': serialize('json', Coffee.objects.all().values('id', 'name'))})

The correct way is:
return JsonResponse({'products': serialize('json', Coffee.objects.all(), fields=('id', 'name'))})

----------------------------------------------------------------------------

success: function (cities) {
    $("#id_city").empty();
    $('<option value="0"> ---------- </option>').appendTo('#id_city');
    $.each(cities, function (idx, city) {
        console.log(idx, city);
        $('<option value="' + city['pk'] + '">' + city['fields']['name'] + '</option>').appendTo($('#id_city'));
    });
},

----------------------------------------------------------------------------

+Django-2 Sample settings.py (April 29, 2018, 2:44 p.m.)

import os
import re


def gettext_noop(s):
    return s

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

ROOT_URLCONF = 'mohsenhassani.urls'

DEBUG = True

ADMINS = [('Mohsen Hassani', 'Mohsen@MohsenHassani.com')]

ALLOWED_HOSTS = []
if DEBUG:
    ALLOWED_HOSTS.extend(['localhost', '127.0.0.1'])

TIME_ZONE = 'Asia/Tehran'

USE_TZ = True

LANGUAGE_CODE = 'en-us'

LANGUAGES = [('en', gettext_noop('English')),
             ('fa', gettext_noop('Persian'))]

USE_I18N = True
LOCALE_PATHS = [
    os.path.join(BASE_DIR, 'locale'),
]

USE_L10N = True

SERVER_EMAIL = 'report@mohsenhassani'

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mohsenhassanidb',
        'USER': 'root',
        'PASSWORD': '',
    }
}

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.humanize',
    'mohsenhassani',
]

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

PREPEND_WWW = False

DISALLOWED_USER_AGENTS = [
    re.compile(r'^NaverBot.*'),
    re.compile(r'^EmailSiphon.*'),
    re.compile(r'^SiteSucker.*'),
    re.compile(r'^sohu-search'),
    re.compile(r'^DotBot'),
]

IGNORABLE_404_URLS = [
    re.compile(r'^/favicon.ico$'),
    re.compile(r'^/robots.txt$'),
]

SECRET_KEY = 'xqb&)90m*_!n3ovc$@%mo8!8!7j5d9o=8nm(iyw%#mzz&o1n6)'

MEDIA_ROOT = os.path.join(BASE_DIR, 'mohsenhassani', 'media/')
MEDIA_URL = '/media/'

STATIC_ROOT = os.path.join(BASE_DIR, 'mohsenhassani', 'static/')
STATIC_URL = '/static/'

FILE_UPLOAD_MAX_MEMORY_SIZE = 52428800 # i.e. 50 MB

WSGI_APPLICATION = 'mohsenhassani.wsgi.application'

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

SESSION_EXPIRE_AT_BROWSER_CLOSE = True

AUTH_USER_MODEL = 'accounts.User'

LOGIN_URL = '/accounts/login/'

LOGIN_REDIRECT_URL = '/accounts/profile/'

LOGOUT_REDIRECT_URL = None

PASSWORD_RESET_TIMEOUT_DAYS = 3

AUTH_PASSWORD_VALIDATORS = []

NUMBER_GROUPING = 3

+Send HTML Email with Attachment (April 30, 2018, 6:40 p.m.)

from django.core.mail import EmailMessage

email = EmailMessage('subject',
                     'message',
                     'email_from',
                     ['to_email']
                     )

email.content_subtype = "html"

if data['attachment']:
    file_ = data['attachment']
    email.attach(file_.name, file_.read(), file_.content_type)

email.send()

-----------------------------------------------------------------------------

for attachment in request.FILES:
    if data[attachment]:
        file_ = data[attachment]
        email.attach(file_.name, file_.read(), file_.content_type)

-----------------------------------------------------------------------------

+URL - Login Required & is_superuser (May 1, 2018, 11:56 a.m.)

from django.contrib.auth.decorators import login_required
from django.contrib.auth.decorators import user_passes_test


urlpatterns = [
    path('reports/', user_passes_test(lambda u: u.is_superuser)(
        login_required(report.reports)), name='reports'),
]


Oops... it seems "user_passes_test" already does the "login_required" check somehow, so remove that decorator:

path('reports/', user_passes_test(lambda u: u.is_superuser)(report.reports), name='reports'),

+Database Functions, Aggregation, Annotations (June 16, 2018, 11:55 a.m.)

from django.db.models import F


OrgPayment.objects.update(shares=F('shares') / 70000)
Property.objects.filter(id=pid).update(views=F('views') + 1)

------------------------------------------------------------

from django.db.models import Count

Book.objects.annotate(num_authors=Count('authors')).order_by('num_authors')

------------------------------------------------------------

from django.db.models import Avg

Author.objects.annotate(average_rating=Avg('book__rating'))

------------------------------------------------------------

from django.db.models import Avg, Count

Book.objects.annotate(num_authors=Count('authors')).aggregate(Avg('num_authors'))

------------------------------------------------------------

Database Functions:

Coalesce:

from django.db.models import Sum, Value
from django.db.models.functions import Coalesce

certificates_total_hours = reward_request.chosen_certificates.aggregate(total_hours=Coalesce(Sum('course_hours'), Value(0)))

------------------------------------------------------------

Concat:

# Get the display name as "name (goes_by)"

from django.db.models import CharField, Value as V
from django.db.models.functions import Concat

Author.objects.create(name='Margaret Smith', goes_by='Maggie')
author = Author.objects.annotate(
    screen_name=Concat('name', V(' ('), 'goes_by', V(')'),
                       output_field=CharField())).get()
print(author.screen_name)

------------------------------------------------------------

Length:

Accepts a single text field or expression and returns the number of characters the value has. If the expression is null, then the length will also be null.

from django.db.models.functions import Length


Author.objects.create(name='Margaret Smith')
author = Author.objects.annotate(
    name_length=Length('name'),
    goes_by_length=Length('goes_by')).get()
print(author.name_length, author.goes_by_length)

------------------------------------------------------------

Lower:

Accepts a single text field or expression and returns the lowercase representation.

Usage example:

>>> from django.db.models.functions import Lower
>>> Author.objects.create(name='Margaret Smith')
>>> author = Author.objects.annotate(name_lower=Lower('name')).get()
>>> print(author.name_lower)
margaret smith

------------------------------------------------------------

Substr:

Returns a substring of length (length) from the field or expression starting at position pos. The position is 1-indexed, so the position must be greater than 0. If the length is None, then the rest of the string will be returned.

Usage example:

>>> # Set the alias to the first 5 characters of the name as lowercase
>>> from django.db.models.functions import Substr, Lower
>>> Author.objects.create(name='Margaret Smith')
>>> Author.objects.update(alias=Lower(Substr('name', 1, 5)))
1
>>> print(Author.objects.get(name='Margaret Smith').alias)
marga

------------------------------------------------------------

Upper:

Accepts a single text field or expression and returns the uppercase representation.


>>> from django.db.models.functions import Upper
>>> Author.objects.create(name='Margaret Smith')
>>> author = Author.objects.annotate(name_upper=Upper('name')).get()
>>> print(author.name_upper)
MARGARET SMITH

------------------------------------------------------------

+Create directories if they don't exist (June 17, 2018, 6:09 p.m.)

import os

from django.conf import settings


avatar_path = '%s/images/avatars' % settings.MEDIA_ROOT
# Check avatar_path itself; checking os.path.dirname(avatar_path) would
# test the parent directory instead of the directory being created.
if not os.path.exists(avatar_path):
    os.makedirs(avatar_path)

+Serve media files in debug mode (April 15, 2019, 12:11 p.m.)

urls.py:
---------

from django.conf import settings
from django.conf.urls.static import static


if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

+Save file path to Django ImageField (June 17, 2018, 7:10 p.m.)

models.py:
-------------
avatar = models.ImageField(_('avatar'), upload_to='manager/images/avatars/', null=True, blank=True)


views:
--------
request.user.avatar.name = 'images/avatars/mohsen.png'
request.user.save()

+Forms - Validate Excel File (July 2, 2018, 10:53 a.m.)

from xlrd import open_workbook, XLRDError

from django import forms
from django.utils.translation import ugettext_lazy as _


class UploadExcelForm(forms.Form):
    excel_file = forms.FileField(label=_('file'))

    def clean_excel_file(self):
        excel_file = self.cleaned_data['excel_file']

        try:
            open_workbook(file_contents=excel_file.read())
            excel_file.file.seek(0)
        except XLRDError:
            raise forms.ValidationError(_('Please upload a valid excel file.'))

        return excel_file

+Messages (July 6, 2018, 8:57 p.m.)

View:
------------
from django.contrib import messages


messages.success(request, _('The information was saved successfully.'))
return HttpResponseRedirect(reverse('url', args=(code,)))

-----------------------------------------------------------------

Template:
------------

{% if messages %}
    <ul class="messages">
        {% for message in messages %}
            <li {% if message.tags %} class="{{ message.tags }}" {% endif %}>{{ message }}</li>
        {% endfor %}
    </ul>
{% endif %}

-----------------------------------------------------------------

{% if message.tags == 'success' %}

+QuerySet - Filter based on Text Length (July 16, 2018, 3:04 p.m.)

from django.db.models.functions import Length

invalid_username = Driver.objects.annotate(
    text_len=Length('username')).filter(text_len__lt=11)

+QuerySet - Duplicate objects based on a specific field (July 16, 2018, 3:16 p.m.)

duplicate_plate_number_ids = Driver.objects.values(
    'plate_number').annotate(Count('plate_number')).order_by().filter(
    plate_number__count__gt=1).values_list('plate_number', flat=True)

+Bulk Insert / Bulk Create (Oct. 7, 2018, 11:07 a.m.)

entry_records = []

for i in range(2000):
    entry_records.append(Entry(headline='This is a test'))

Entry.objects.bulk_create(entry_records)

+Force files to open in the browser instead of downloading (Oct. 9, 2018, 8:48 a.m.)

Force browser that the file should be viewed in the browser:

Content-Type: application/pdf
Content-Disposition: inline; filename="filename.pdf"


To have the file downloaded rather than viewed:

Content-Type: application/pdf
Content-Disposition: attachment; filename="filename.pdf"
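In a Django view these headers can be set like this (a minimal sketch; the file path is a placeholder):

from django.http import FileResponse


def show_pdf(request):
    response = FileResponse(open('/path/to/filename.pdf', 'rb'),
                            content_type='application/pdf')
    response['Content-Disposition'] = 'inline; filename="filename.pdf"'
    return response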

+Find Model Relations (Oct. 17, 2018, 4:48 p.m.)

for field in [f for f in file._meta.get_fields() if not f.concrete]:

----------------------------------------------------------------------

model = field.related_model

model = type(instance)

# For deferred instances
model = instance._meta.proxy_for_model

----------------------------------------------------------------------

app_label = model._meta.app_label

app_label = instance._meta.app_label

----------------------------------------------------------------------

model_name = model.__name__

----------------------------------------------------------------------

if field.get_internal_type() == 'ForeignKey':

----------------------------------------------------------------------

field.remote_field.name

----------------------------------------------------------------------

field.through.objects.filter(file_id=file.id)

----------------------------------------------------------------------

ct = ContentType.objects.get_for_model(model)

----------------------------------------------------------------------

model._meta.local_fields

----------------------------------------------------------------------

+Pass JSON object data from view to template (April 13, 2019, 11:32 a.m.)

View:

import json


data = json.dumps(the_dictionary)
return render(request, 'abc.html', {'data': data})

----------------------------------------------------

Template:

<script type="text/javascript">
{{ data|safe }}
</script>

+Form - Access Field type in template (Dec. 8, 2018, 12:23 p.m.)

{{ field.field.widget.input_type }}

+QuerySet - Group By (Dec. 14, 2018, 8:52 a.m.)

requests = Loan.objects.filter(loan__type='n',
                               status__status__in=['1', '2', '3'])
stats = requests.values('personnel__center__title'
                        ).annotate(Count('id')).order_by()



{% for stat in stats %}
    <tr>
        <td>{{ forloop.counter }}</td>
        <td>{{ stat.personnel__center__title }}</td>
        <td>{{ stat.id__count }}</td>
    </tr>
{% endfor %}

-----------------------------------------------------------------------

this_week_articles = Article.objects.filter(
    created_at__gte=seven_days_ago,
    deleted=False
).values(
    'creating_user__first_name',
    'creating_user__last_name'
).annotate(Count('pk')).order_by()


# Result is:
<QuerySet [{'creating_user__last_name': 'Hassani', 'creating_user__first_name': 'Mohsen', 'pk__count': 286}, {'creating_user__last_name': 'BiGheri', 'creating_user__first_name': 'Mehdi', 'pk__count': 31}]>

-----------------------------------------------------------------------

from itertools import groupby


def extract_call_id(call):
    return call.call_id


# Note: groupby() only groups consecutive items, so today_calls must
# already be ordered by call_id.
grouped_call_ids = [list(g) for t, g in groupby(today_calls, key=extract_call_id)]

-----------------------------------------------------------------------

+Google reCAPTCHA API (Dec. 17, 2018, 12:55 p.m.)

1- Register your application in the reCAPTCHA admin:
https://www.google.com/recaptcha/admin#list


2- After registering your website, you will be handed a Site key and a Secret key. The Site key will be used in the reCAPTCHA widget which is rendered within the page where you want to place it. The Secret key will be stored safely in the server, made available through the settings.py module.
GOOGLE_RECAPTCHA_SECRET_KEY = ''


3- Add the following tag to the head:
<script src='https://www.google.com/recaptcha/api.js'></script>


4- Add the following tag to the form:
<div class="g-recaptcha" data-sitekey=""></div>


5- pip install requests


6- Views.py
import requests
from django.conf import settings

if request.POST:
    recaptcha_response = request.POST.get('g-recaptcha-response')
    data = {
        'secret': settings.GOOGLE_RECAPTCHA_SECRET_KEY,
        'response': recaptcha_response
    }
    response = requests.post(
        'https://www.google.com/recaptcha/api/siteverify', data=data)
    result = response.json()

    if result['success']:
        ...  # validation passed
    else:
        ...  # validation failed

+Split QuerySets (Dec. 17, 2018, 10:26 p.m.)

def chunks(items, length):
    for chunk in range(0, len(items), length):
        yield items[chunk:chunk + length]

------------------------------------------------------------

Usage Example:

excel_file = get_object_or_404(ExcelFile, id=eid)

job_list = list(chunks(excel_file.tempdata_set.all(), 250))

------------------------------------------------------------

+Get all related Django model objects (Dec. 30, 2018, 12:30 p.m.)

from django.db.models.deletion import Collector
from django.contrib.admin.utils import NestedObjects

user = User.objects.get(id=1)

collector = NestedObjects(using="default")
collector.collect([user])
print(collector.data)

+Admin - Render checkboxes for m2m (Jan. 13, 2019, 10:06 a.m.)

admin.py:

---------------------------------------------------------

from django.contrib.auth.admin import UserAdmin
from django.db import models
from django.forms import CheckboxSelectMultiple


class PersonnelAdmin(UserAdmin):
    formfield_overrides = {
        models.ManyToManyField: {'widget': CheckboxSelectMultiple}
    }

+Truncate a long string (Jan. 27, 2019, 1:47 a.m.)

data = data[:75]

----------------------------------------------------------------------

import textwrap

textwrap.shorten("Hello world!", width=12)

textwrap.shorten("Hello world", width=10, placeholder="...")

----------------------------------------------------------------------

from django.utils.text import Truncator

value = Truncator(value).chars(75)

----------------------------------------------------------------------

+Model Conventions (Feb. 8, 2019, 7:53 a.m.)

https://steelkiwi.com/blog/best-practices-working-django-models-python/

+CSRF Token in an external javascript file (March 16, 2019, 2:11 p.m.)

function getCookie(name) {
var cookieValue = null;
if (document.cookie && document.cookie != '') {
var cookies = document.cookie.split(';');
for (var i = 0; i < cookies.length; i++) {
var cookie = cookies[i].trim();
// Does this cookie string begin with the name we want?
if (cookie.substring(0, name.length + 1) == (name + '=')) {
cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
break;
}
}
}
return cookieValue;
}


// Then call it like the following:
getCookie('csrftoken');

+Forms - Validation (March 11, 2018, 4:29 p.m.)

class ReportForm1(forms.Form):
    src_server_ip = forms.CharField(required=False)
    dst_server_ip = forms.CharField(required=False)

    def clean(self):
        if self.cleaned_data['src_server_ip'] == '' and self.cleaned_data[
                'dst_server_ip'] == '':
            self.add_error('src_server_ip',
                           'At least a source or destination is required.')

+URL Regex that accepts all characters (Jan. 20, 2018, 1:14 a.m.)

(.*)

+Forms - Custom ModelChoiceField (Nov. 15, 2017, 3:54 p.m.)

class AppointmentChoiceField(forms.ModelChoiceField):
    def label_from_instance(self, appointment):
        return "%s" % appointment.get_time()

--------------------------------------------------------------------

from django.contrib.humanize.templatetags.humanize import intcomma


class IntCommaChoiceField(forms.ModelChoiceField):
    def label_from_instance(self, base_amount):
        return "%s" % intcomma(base_amount)

--------------------------------------------------------------------

class LoanAmountEditForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields['base_amount'] = IntCommaChoiceField(
            queryset=LoanBaseAmount.objects.all(),
            label=_('base amount')
        )

    class Meta:
        model = LoanAmount
        exclude = []

--------------------------------------------------------------------

+JPG Validator (July 17, 2017, 10:23 a.m.)

from PIL import Image

from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _


def jpg_validator(certificate):
    # PIL reports JPEG files with the format name 'JPEG'.
    file_type = Image.open(certificate.file).format
    certificate.file.seek(0)
    if file_type == 'JPEG':
        return True
    else:
        raise ValidationError(_('The extension of certificate file should be jpg.'))

+Views - order_by sum of fields (June 10, 2017, 1:24 p.m.)

top_traffic_servers = Server.objects.extra(
    select={'sum': 'total_bytes_outgoing + total_bytes_incoming'},
    order_by=('-sum',))


---------------------------------------------------

If you need to do some filtering, you can add filter() to the end:

top_traffic_servers = Server.objects.extra(
    select={'sum': 'total_bytes_outgoing + total_bytes_incoming'},
    order_by=('-sum',)).filter(status='1')
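Note: .extra() is deprecated in recent Django versions; the same result can be written with annotate() and F expressions (a sketch under that assumption):

from django.db.models import F

top_traffic_servers = Server.objects.annotate(
    total=F('total_bytes_outgoing') + F('total_bytes_incoming')
).order_by('-total').filter(status='1')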

+Use MySQL or MariaDB with Django (May 18, 2017, 10:11 p.m.)

1- Installation:
MySQL:
sudo apt-get install python-pip python-dev mysql-server libmysqlclient-dev

MariaDB:
sudo apt-get install python-pip python-dev mariadb-server libmariadbclient-dev libssl-dev


2- mysql -u root -p


3- CREATE DATABASE myproject CHARACTER SET UTF8;


4- CREATE USER myprojectuser@localhost IDENTIFIED BY 'password';


5- GRANT ALL PRIVILEGES ON myproject.* TO myprojectuser@localhost;


6- FLUSH PRIVILEGES;


7- exit

8- In the project environment:
pip install mysqlclient
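9- Point Django at the database in settings.py (a minimal sketch using the names from steps 3 and 4):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'USER': 'myprojectuser',
        'PASSWORD': 'password',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}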

+X-Frame-Options (Sept. 26, 2016, 9:05 p.m.)

Error in remote calling:
..does not permit cross-origin framing

Description:
There is a special header to allow or disallow showing page inside i-frame - X-Frame-Options It's used to prevent an attack called clickjacking. You can check the Django's doc about it https://docs.djangoproject.com/en/dev/ref/clickjacking/

Sites that want their content to be shown in i-frame just don't set this header.

In your installation of Django this protection is turned on by default. If you want to allow embedding your content inside i-frames you can either disable the clickjacking protection in your settings for the whole site, or use per-view control with:

django.views.decorators.clickjacking decorators

xframe_options_exempt
xframe_options_deny
xframe_options_sameorigin

Per view control is a better option.

--------------------------------------------------------------

Example:

from django.views.decorators.clickjacking import xframe_options_exempt

@xframe_options_exempt
def home(request):
    ...

+Django Session Key (Sept. 20, 2016, 8:58 p.m.)

if not request.session.exists(request.session.session_key):
    request.session.create()
session_key = request.session.session_key

+Django REST Framework - Installation and Configuration (Sept. 20, 2016, 12:44 a.m.)

1-pip install djangorestframework django-filter markdown


2-Add 'rest_framework' to your INSTALLED_APPS setting.
INSTALLED_APPS = (
    ...
    'rest_framework',
)


3-If you're intending to use the browsable API you'll probably also want to add REST framework's login and logout views. Add the following to your root urls.py file.

urlpatterns = [
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework'))
]

+User Timezone (Sept. 5, 2016, 2:12 a.m.)

There are several plugins you can use, but I guess there are reasons I need to avoid using them:

- They mainly require big .dat files which contain the timezones allover the world

- They use middlewares to check the user's timezone, which run on every request and can eventually slow down page loads.

- They only work with templates (using template tags and filters).

--------------------------------------------------------

The simplest way I have achieved is using a snippet which uses an online web service:

import pytz
import requests

from django.utils import timezone


user_time_zone = requests.get('http://freegeoip.net/json/').json()['time_zone']
timezone.activate(pytz.timezone(user_time_zone))

This snippet can be used in only the views which need to detect user's timezone; no need of middleware.

--------------------------------------------------------

If you ever needed to use it in every request, you can use it in a middleware.

Create a file named `middleware.py` and add this middleware to it:

import requests
import pytz

from django.utils import timezone


class UserTimezoneMiddleware(object):
    def process_request(self, request):
        try:
            freegeoip_response = requests.get('http://freegeoip.net/json/')
            freegeoip_response_json = freegeoip_response.json()
            user_time_zone = freegeoip_response_json['time_zone']
            timezone.activate(pytz.timezone(user_time_zone))
        except Exception:
            pass
        return None


Add the `UserTimezoneMiddleware` class to settings.py `MIDDLEWARE_CLASSES` variable.


Now you can get the date/time based on user's timezone:
timezone.localtime(timezone.now())
timezone.localtime(settings_.under_construction_until)

--------------------------------------------------------

+Timestamp from datetime field (Sept. 5, 2016, 1:05 a.m.)

You can do it in template or in view.

Template:
------------

{% now "U" %}
{{ value|date:"U" }}

------------------------------------------------------------------

View:
-------

from django.utils.dateformat import format
format(mymodel.mydatefield, 'U')
OR
import time
time.mktime(mydate.timetuple())

+Manually create a POST/GET QueryDict from a dictionary (Aug. 27, 2016, 3:11 a.m.)

from django.http import QueryDict, MultiValueDict

get_data = {'p_type': request.GET['p_type'], 'facilities': request.GET.getlist('facilities')}
OR
get_data = dict(request.GET.iteritems())

qdict = QueryDict('', mutable=True)
qdict.update(MultiValueDict({'facilities': get_data['facilities']}))
qdict.update(post_data)
request.POST = qdict

+Django Dumpdata Field (Aug. 26, 2016, 3:43 a.m.)

https://github.com/bitmazk/django-dumpdata-field

1- pip install django-dumpdata-field

2-
INSTALLED_APPS = (
    'dumpdata_field',
)

3- dumpdata_field facemelk.province --fields=id,province_name > /home/mohsen/Projects/facemelk/facemelk/fixtures/provinces_fields.json

+Ajax File Upload (Aug. 22, 2016, 10:20 p.m.)

<form action="{% url 'glasses:upload-face' %}" method="POST" id="upload-face-form" enctype="multipart/form-data"> {% csrf_token %}
    <input type="file" id="upload-face" name="face" />
</form>

-------------------------------------------------------

$('#upload-face').change(function() {
    var form = $('#upload-face-form');
    var form_data = new FormData(form[0]);
    $.ajax({
        type: form.attr('method'),
        url: form.attr('action'),
        data: form_data,
        contentType: false,
        cache: false,
        processData: false,
        dataType: 'json',
        success: function(image) {

        }, error: function(error) {

        }
    });
});

-------------------------------------------------------

def upload_face(request):
    if request.is_ajax():
        image = request.FILES.get('face')
        if image:
            face = open('face.jpg', 'wb')
            for chunk in image.chunks():
                face.write(chunk)
            face.close()
        return JsonResponse({'hi': 'hi'})
    else:
        return HttpResponseRedirect(reverse('home'))

-------------------------------------------------------

+Django Grappelli (May 16, 2016, 4:04 a.m.)

Official Website:

http://grappelliproject.com/

-------------------------------------------------

Documentation

https://django-grappelli.readthedocs.io/en/latest/

-------------------------------------------------

Installation:

pip install django-grappelli

-------------------------------------------------

Setup:

1-
INSTALLED_APPS = (
    'grappelli',
    'django.contrib.admin',
)


2-Add URL-patterns:
urlpatterns = [
    url(r'^grappelli/', include('grappelli.urls')),
    url(r'^admin/', include(admin.site.urls)),
]

3-Add the request context processor (needed for the Dashboard and the Switch User feature):
TEMPLATES = [
    {
        ...
        'OPTIONS': {
            'context_processors': [
                ...
                'django.template.context_processors.request',
                ...
            ],
        },
    },
]

4-Collect the media files:
python manage.py collectstatic

-------------------------------------------------

Customization:

http://django-grappelli.readthedocs.io/en/latest/customization.html

-------------------------------------------------

Dashboard Setup:

http://django-grappelli.readthedocs.io/en/latest/dashboard_setup.html

-------------------------------------------------

Third Party Applications:

http://django-grappelli.readthedocs.io/en/latest/thirdparty.html

+Views - Receive and parse JSON data from a request using django-cors-headers (May 4, 2016, 3:19 a.m.)

import json

from django.views.decorators.csrf import csrf_exempt


@csrf_exempt
def update_note(request):
    request_json_data = bytes.decode(request.body)
    request_data = json.loads(request_json_data)
    print(request_data)

------------------------------------------------------------------

You need to install a plugin too:
https://github.com/ottoyiu/django-cors-headers

1- pip install django-cors-headers


2-
INSTALLED_APPS = (
    ...
    'corsheaders',
    ...
)


3-
MIDDLEWARE = [  # Or MIDDLEWARE_CLASSES on Django < 1.10
    ...
    'corsheaders.middleware.CorsMiddleware',
    'django.middleware.common.CommonMiddleware',
    ...
]


4-
CORS_ORIGIN_WHITELIST = (
    'http://localhost:8000',
)

+Internationalization (May 2, 2016, 10:56 p.m.)

urls.py:

from django.conf.urls.i18n import i18n_patterns

urlpatterns += i18n_patterns()

----------------------------------------------------------------------

settings.py:

MIDDLEWARE = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.locale.LocaleMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'django.middleware.security.SecurityMiddleware',
)

----------------------------------------------------------------------

And finally in a context_processors.py file, add some snippet like this:

from django.utils.translation import activate


def change_language(request):
    if '/admin/' not in request.get_full_path():
        if '/fa/' not in request.get_full_path():
            activate('en')
        else:
            activate('fa')
        return {}
    else:
        return {}

----------------------------------------------------------------------

{% get_language_info for LANGUAGE_CODE as lang %}
{% get_language_info for "pl" as lang %}

You can then access the information:

Language code: {{ lang.code }}<br />
Name of language: {{ lang.name_local }}<br />
Name in English: {{ lang.name }}<br />
Bi-directional: {{ lang.bidi }}
Name in the active language: {{ lang.name_translated }}

There are also simple filters available for convenience:
{{ LANGUAGE_CODE|language_name }} (“German”)
{{ LANGUAGE_CODE|language_name_local }} (“Deutsch”)
{{ LANGUAGE_CODE|language_bidi }} (False)
{{ LANGUAGE_CODE|language_name_translated }} (“německy”, when active language is Czech)

<form action="{% url 'set_language' %}" method="post">{% csrf_token %}
    <input name="next" type="hidden" value="{{ redirect_to }}" />
    <select name="language">
        {% get_current_language as LANGUAGE_CODE %}
        {% get_available_languages as LANGUAGES %}
        {% get_language_info_list for LANGUAGES as languages %}
        {% for language in languages %}
            <option value="{{ language.code }}"{% if language.code == LANGUAGE_CODE %} selected="selected"{% endif %}>
                {{ language.name_local }} ({{ language.code }})
            </option>
        {% endfor %}
    </select>
    <input type="submit" value="Go" />
</form>


from django.utils import translation
user_language = 'fr'
translation.activate(user_language)
request.session[translation.LANGUAGE_SESSION_KEY] = user_language


from django.http import HttpResponse

def hello_world(request, count):
    if request.LANGUAGE_CODE == 'de-at':
        return HttpResponse("You prefer to read Austrian German.")
    else:
        return HttpResponse("You prefer to read another language.")

----------------------------------------------------------------------

from django.conf import settings
from django.utils import translation

class ForceLangMiddleware:

    def process_request(self, request):
        request.LANG = getattr(settings, 'LANGUAGE_CODE', settings.LANGUAGE_CODE)
        translation.activate(request.LANG)
        request.LANGUAGE_CODE = request.LANG

----------------------------------------------------------------------

+Admin - Access ModelForm properties (April 23, 2016, 9:09 a.m.)

def __init__(self, *args, **kwargs):
    initial = kwargs.get('initial', {})
    initial['material'] = 'Test'
    kwargs['initial'] = initial
    super(ArtefactForm, self).__init__(*args, **kwargs)

-----------------------------------

for field in self.fields.items():
    print(field[0])  # Prints field names
    print(field[1].label)  # Prints field labels

+View - Replace/Populate POST data (April 19, 2016, 11:38 a.m.)

If the request was the result of a Django form submission, then it is reasonable for POST being immutable to ensure the integrity of the data between the form submission and the form validation. However, if the request was not sent via a Django form submission, then POST is mutable as there is no form validation.

mutable = request.POST._mutable
request.POST._mutable = True
request.POST['some_data'] = 'test data'
request.POST._mutable = mutable

----------------------------------------------------------------

In an HttpRequest object, the GET and POST attributes are instances of django.http.QueryDict, a dictionary-like class customized to deal with multiple values for the same key. This is necessary because some HTML form elements, notably <select multiple>, pass multiple values for the same key.

The QueryDicts at request.POST and request.GET will be immutable when accessed in a normal request/response cycle. To get a mutable version you need to use .copy().

----------------------------------------------------------------

request.POST = request.POST.copy()
request.POST['some_key'] = 'some_value'

----------------------------------------------------------------

Methods

QueryDict implements all the standard dictionary methods because it’s a subclass of dictionary. Exceptions are outlined here:

QueryDict.__init__(query_string=None, mutable=False, encoding=None)

Instantiates a QueryDict object based on query_string.

>>> QueryDict('a=1&a=2&c=3')
<QueryDict: {'a': ['1', '2'], 'c': ['3']}>

If query_string is not passed in, the resulting QueryDict will be empty (it will have no keys or values).

Most QueryDicts you encounter, and in particular those at request.POST and request.GET, will be immutable. If you are instantiating one yourself, you can make it mutable by passing mutable=True to its __init__().

Strings for setting both keys and values will be converted from encoding to unicode. If encoding is not set, it defaults to DEFAULT_CHARSET.

QueryDict.__getitem__(key)

Returns the value for the given key. If the key has more than one value, __getitem__() returns the last value. Raises django.utils.datastructures.MultiValueDictKeyError if the key does not exist. (This is a subclass of Python’s standard KeyError, so you can stick to catching KeyError.)

QueryDict.__setitem__(key, value)

Sets the given key to [value] (a Python list whose single element is value). Note that this, as other dictionary functions that have side effects, can only be called on a mutable QueryDict (such as one that was created via copy()).

QueryDict.__contains__(key)

Returns True if the given key is set. This lets you do, e.g., if "foo" in request.GET.

QueryDict.get(key, default=None)

Uses the same logic as __getitem__() above, with a hook for returning a default value if the key doesn’t exist.

QueryDict.setdefault(key, default=None)

Just like the standard dictionary setdefault() method, except it uses __setitem__() internally.

QueryDict.update(other_dict)

Takes either a QueryDict or standard dictionary. Just like the standard dictionary update() method, except it appends to the current dictionary items rather than replacing them. For example:

>>> q = QueryDict('a=1', mutable=True)
>>> q.update({'a': '2'})
>>> q.getlist('a')
['1', '2']
>>> q['a'] # returns the last
'2'

QueryDict.items()

Just like the standard dictionary items() method, except this uses the same last-value logic as __getitem__(). For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.items()
[('a', '3')]

QueryDict.iteritems()

Just like the standard dictionary iteritems() method. Like QueryDict.items() this uses the same last-value logic as QueryDict.__getitem__().

QueryDict.iterlists()

Like QueryDict.iteritems() except it includes all values, as a list, for each member of the dictionary.

QueryDict.values()

Just like the standard dictionary values() method, except this uses the same last-value logic as __getitem__(). For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.values()
['3']

QueryDict.itervalues()

Just like QueryDict.values(), except an iterator.

In addition, QueryDict has the following methods:

QueryDict.copy()

Returns a copy of the object, using copy.deepcopy() from the Python standard library. This copy will be mutable even if the original was not.

QueryDict.getlist(key, default=None)

Returns the data with the requested key, as a Python list. Returns an empty list if the key doesn’t exist and no default value was provided. It’s guaranteed to return a list of some sort unless the default value provided is not a list.

QueryDict.setlist(key, list_)

Sets the given key to list_ (unlike __setitem__()).

QueryDict.appendlist(key, item)

Appends an item to the internal list associated with key.

QueryDict.setlistdefault(key, default_list=None)[source]

Just like setdefault, except it takes a list of values instead of a single value.

QueryDict.lists()

Like items(), except it includes all values, as a list, for each member of the dictionary. For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.lists()
[('a', ['1', '2', '3'])]

QueryDict.pop(key)[source]

Returns a list of values for the given key and removes them from the dictionary. Raises KeyError if the key does not exist. For example:

>>> q = QueryDict('a=1&a=2&a=3', mutable=True)
>>> q.pop('a')
['1', '2', '3']

QueryDict.popitem()[source]

Removes an arbitrary member of the dictionary (since there’s no concept of ordering), and returns a two value tuple containing the key and a list of all values for the key. Raises KeyError when called on an empty dictionary. For example:

>>> q = QueryDict('a=1&a=2&a=3', mutable=True)
>>> q.popitem()
('a', ['1', '2', '3'])

QueryDict.dict()

Returns a dict representation of the QueryDict. For every (key, list) pair in the QueryDict, the dict will have (key, item), where item is one element of the list, using the same logic as QueryDict.__getitem__():

>>> q = QueryDict('a=1&a=3&a=5')
>>> q.dict()
{'a': '5'}

QueryDict.urlencode(safe=None)[source]

Returns a string of the data in query-string format. Example:

>>> q = QueryDict('a=2&b=3&b=5')
>>> q.urlencode()
'a=2&b=3&b=5'

Optionally, urlencode can be passed characters which do not require encoding. For example:

>>> q = QueryDict(mutable=True)
>>> q['next'] = '/a&b/'
>>> q.urlencode(safe='/')
'next=/a%26b/'

+Admin - Hide fields dynamically (April 11, 2016, 7:07 p.m.)

def get_fields(self, request, obj=None):
    fields = admin.ModelAdmin.get_fields(self, request, obj)
    if settings.DEBUG:
        return fields
    else:
        return ('parent', 'name_en', 'name_fa', 'content_en', 'content_fa', 'ordering',
                'languages', 'header_image', 'project_thumbnail')

+Error ==> Permission denied when trying to access database after restore (migration) (April 10, 2016, 10:47 p.m.)

Run these commands in a system shell (each one invokes psql):
psql mohsen_notesdb -c "GRANT ALL ON ALL TABLES IN SCHEMA public to mohsen_notes;"
psql mohsen_notesdb -c "GRANT ALL ON ALL SEQUENCES IN SCHEMA public to mohsen_notes;"
psql mohsen_notesdb -c "GRANT ALL ON ALL FUNCTIONS IN SCHEMA public to mohsen_notes;"

+Admin - Resize Image Signal (April 5, 2016, 11:51 a.m.)

Create a file `resize_image.py` with this content:

from PIL import Image

from django.conf import settings


def resize_image(sender, instance, created, **kwargs):
    if instance.position == 't':
        width = settings.TOP_ADS_WIDTH
        height = settings.TOP_ADS_HEIGHT
    else:
        width = settings.BOTTOM_ADS_WIDTH
        height = settings.BOTTOM_ADS_HEIGHT

    img = Image.open(instance.image.path)
    if img.mode != 'RGB':
        img = img.convert('RGB')
    img.resize((width, height), Image.ANTIALIAS).save(instance.image.path, format='JPEG')

--------------------------------------------------------------------------------------------

After model definition in your models.py file, import `resize_image` and:
models.signals.post_save.connect(resize_image, sender=TheModel)

+Admin - Hide model in admin dynamically (Feb. 29, 2016, 9:50 a.m.)

class AccessoryCategoryAdmin(admin.ModelAdmin):
    def get_model_perms(self, request):
        perms = admin.ModelAdmin.get_model_perms(self, request)
        if request.user.username == settings.SECOND_ADMIN:
            return {}
        return perms

+Admin - Display readonly fields based on conditions (Feb. 28, 2016, 3:02 p.m.)

class AccessoryAdmin(admin.ModelAdmin):
    list_display = ('name', 'category', 'price', 'quantity', 'ordering', 'display')
    list_filter = ('category', 'display')

    def get_readonly_fields(self, request, obj=None):
        if request.user.username == settings.SECOND_ADMIN:
            readonly_fields = ('category', 'name', 'image', 'price', 'main_image', 'description', 'ordering', 'url_name')
            return readonly_fields
        else:
            return self.readonly_fields

+Form - How to add a star after fields (Feb. 27, 2016, 10:47 p.m.)

Add the `required_css_class` property to the Form class like this:

class ProfileForm(forms.Form):
    required_css_class = 'required'

    first_name = forms.CharField(label=_('first name'), max_length=30)
    last_name = forms.CharField(label=_('last name'), max_length=30)
    cellphone_number = forms.CharField(label=_('cellphone'), max_length=20)


Then use the property `label_tag` of form fields to set the titles:
{{ form.first_name.errors }} {{ form.first_name.label_tag }}
{{ form.last_name.errors }} {{ form.last_name.label_tag }}
{{ form.cellphone_number.errors }} {{ form.cellphone_number.label_tag }}

Use it in CSS to style it or add an asterisk:
<style type="text/css">
    .required:after {
        content: " *";
        color: red;
    }
</style>

+Decorators (Jan. 29, 2016, 4:34 p.m.)

Create a python file named `decorators.py` in the app and write your decorators as follows:

from django.shortcuts import render


def login_required(view_func):
    def wrap(request, *args, **kwargs):
        if request.user.is_authenticated():
            return view_func(request, *args, **kwargs)
        else:
            return render(request, 'issue_tracker/access_denied.html',
                          {'login_required': 'yes'})
    return wrap
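
Usage (a sketch; the view name is illustrative):

@login_required
def issue_list(request):
    pass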

-----------------------------------------------------------

from django.utils.functional import wraps


def can_participate_poll(view):
    @wraps(view)
    def inner(request, *args, **kwargs):
        print(kwargs)  # Prints {'qnum': 11, 'qid': 23}
        return view(request, *args, **kwargs)
    return inner


This will print the args which are passed to the view.

@can_participate_poll
def poll_view(request, qid, qnum):
    pass

-----------------------------------------------------------

from django.contrib.auth.decorators import user_passes_test

@user_passes_test(lambda u: u.is_superuser)
def my_view(request):
    pass

-----------------------------------------------------------

+Admin - Change Header Title (Jan. 14, 2016, 8:44 p.m.)

In the main urls.py file:

admin.site.site_header = _('YouStone Administration')

+Change app name for admin (Jan. 27, 2016, 11:51 p.m.)

1- Create a python file named `apps.py` in the app:

from django.apps import AppConfig
from django.utils.translation import ugettext_lazy as _

class CourseConfig(AppConfig):
    name = 'course'
    verbose_name = _('course')

2- Edit the __init__.py file within the app:
default_app_config = 'course.apps.CourseConfig'

+Save File/Image (Dec. 1, 2015, 3:16 p.m.)

import uuid
from PIL import Image as PILImage
import imghdr
import os

from django.conf import settings

from manager.home.models import Image


def save_image(img_file, width=0, height=0):
    # Generate a random image name
    img_name = uuid.uuid4().hex + '.' + img_file.name.split('.')[-1]

    # Saving the picture on disk
    img = open(settings.IMG_ROOT + img_name, 'wb')
    for chunk in img_file.chunks():
        img.write(chunk)
    img.close()

    img = open(img.name, 'rb')
    # Is the saved image a valid image file!?
    if not imghdr.what(img) or imghdr.what(img).lower() not in ['jpg', 'jpeg', 'gif', 'png']:
        os.remove(img.name)
        return {'is_image': False}
    else:
        if width or height:
            # Resizing the image
            pil_img = PILImage.open(img.name)

            if pil_img.mode != 'RGB':
                pil_img = pil_img.convert('RGB')
            pil_img.resize((width, height), PILImage.ANTIALIAS).save(img.name, format='JPEG')

        # Saving the image location on the database
        img = Image.objects.create(name=img_name)
        return {'is_image': True, 'image': img}


def create_unique_file_name(path, file_name):
    while os.path.exists(path + file_name):
        if '.' in file_name:
            file_name = file_name.replace('.', '_.', -1)
        else:
            file_name += '_'

    return file_name

+Custom Middleware Class (Nov. 21, 2015, 10:39 p.m.)

Create a file named `middleware.py` in a module and add your middleware like this:

from django.shortcuts import render

from nespresso.models import Settings


class UnderConstruction:
    def process_request(self, request):
        settings_ = Settings.objects.all()
        if settings_ and settings_[0].under_construction:
            return render(request, 'nespresso/under_construction.html')


After defining a middleware, add it to the settings:
MIDDLEWARE_CLASSES = MIDDLEWARE_CLASSES + (
    'nespresso.middleware.UnderConstruction',
)


--------------------------------------------------------------

Django 2:

from django.shortcuts import HttpResponseRedirect
from django.urls import reverse


class UnderConstructionMiddleWare:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Do the conditions here and only redirect when they match;
        # redirecting unconditionally would loop forever.
        if under_construction:  # placeholder for your own check
            return HttpResponseRedirect(reverse('under_construction:home'))
        return self.get_response(request)



In settings.py, add the middleware's dotted path (e.g. 'nespresso.middleware.UnderConstructionMiddleWare') to MIDDLEWARE.

--------------------------------------------------------------

+Add Action Form to Action (Oct. 13, 2015, 10:48 a.m.)

from django import forms
from django.contrib import messages
from django.contrib.admin.helpers import ActionForm
from django.utils.translation import ugettext_lazy as _


class ChangeMembershipTypeForm(ActionForm):
    MEMBERSHIP_TYPE = (
        ('1', _('Gold')),
        ('2', _('Silver')),
        ('3', _('Bronze')),
        ('4', _('Basic'))
    )
    membership_type = forms.ChoiceField(choices=MEMBERSHIP_TYPE, label=_('membership type'), required=False)


class CompanyAdmin(admin.ModelAdmin):
    action_form = ChangeMembershipTypeForm
    actions = ['change_membership_type']

    def change_membership_type(self, request, queryset):
        membership_type = request.POST['membership_type']
        queryset.update(membership_type=membership_type)
        self.message_user(request, _('Successfully updated membership type for selected rows.'), messages.SUCCESS)
    change_membership_type.short_description = _('Change Membership Type')

+Admin - Hide action (Oct. 8, 2015, 10:56 a.m.)

class MyAdmin(admin.ModelAdmin):

    def has_delete_permission(self, request, obj=None):
        return False

    def get_actions(self, request):
        actions = super(MyAdmin, self).get_actions(request)
        if 'delete_selected' in actions:
            del actions['delete_selected']
        return actions

--------------------------------------------------------------------

def get_actions(self, request):
    actions = admin.ModelAdmin.get_actions(self, request)
    if request.user.username == settings.SECOND_ADMIN:
        return []
    else:
        return actions

+Model - Disable the Add and / or Delete action for a specific model (March 10, 2016, 11:02 p.m.)

def has_add_permission(self, request):
    perms = admin.ModelAdmin.has_add_permission(self, request)
    if request.user.username == settings.SECOND_ADMIN:
        return False
    else:
        return perms

def has_delete_permission(self, request, obj=None):
    perms = admin.ModelAdmin.has_delete_permission(self, request, obj)
    if request.user.username == settings.SECOND_ADMIN:
        return False
    else:
        return perms

+URLS - Redirect (Oct. 6, 2015, 11:27 a.m.)

from django.views.generic import RedirectView

url(r'^$', RedirectView.as_view(url='/online-calls/'), name='home'),

+Send HTML email using send_mail (Sept. 28, 2015, 4:48 p.m.)

from django.template import loader
from django.core.mail import send_mail


html = loader.render_to_string('nespresso/admin_order_notification.html', {'order': order})
send_mail('Nespresso New Order from - %s' % order.customer.user.get_full_name(),
          '',
          'mail@buynespresso.ir',
          OrderingEmail.objects.all().values_list('email', flat=True),
          html_message=html)

+Admin - Many to Many Inline (Sept. 28, 2015, 10:23 a.m.)

class OrderInline(admin.TabularInline):
    model = Order.items.through


class OrderItemAdmin(admin.ModelAdmin):
    inlines = [OrderInline]


class OrderAdmin(admin.ModelAdmin):
    list_display = ('customer', 'get_order_url',)
    exclude = ('items',)
    inlines = [OrderInline]


admin.site.register(OrderItem)
admin.site.register(Order, OrderAdmin)

+Change list display link in django admin (Sept. 27, 2015, 5:47 p.m.)

In models.py file:

class Order(models.Model):
    customer = models.ForeignKey(Customer, null=True, on_delete=models.SET_NULL)
    total_price = models.PositiveIntegerField()
    items = models.ManyToManyField(OrderItem)
    date_time = models.DateTimeField(default=now)

    def __str__(self):
        return '%s' % self.customer

    def get_order_url(self):
        return '<a href="%s" target="_blank">%s - %s</a>' % (reverse('customer:order', args=(self.pk,)),
                                                             self.customer.user.get_full_name(),
                                                             self.date_time.strftime('%D--%H:%M'))
    # In django prior to version 2.0:
    get_order_url.allow_tags = True

    # In django after version 2.0, drop allow_tags and mark the returned string safe instead:
    # from django.utils.safestring import mark_safe  # At the top of your models.py file
    # mark_safe('<a href="#"></a>')

----------------------------------------------------------------------

And then in admin.py file:

class OrderAdmin(admin.ModelAdmin):
    list_display = ('get_order_url',)

+Admin - Override User Form (Sept. 15, 2015, 2:13 p.m.)

from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.forms import UserChangeForm, UserCreationForm
from django import forms

from .models import Supervisor


class SupervisorChangeForm(UserChangeForm):
    class Meta(UserChangeForm.Meta):
        model = Supervisor


class SupervisorCreationForm(UserCreationForm):
    class Meta(UserCreationForm.Meta):
        model = Supervisor

    def clean_username(self):
        username = self.cleaned_data['username']
        try:
            Supervisor.objects.get(username=username)
        except Supervisor.DoesNotExist:
            return username
        raise forms.ValidationError(self.error_messages['duplicate_username'])


class SupervisorAdmin(UserAdmin):
    form = SupervisorChangeForm
    add_form = SupervisorCreationForm
    fieldsets = (
        (None, {'fields': ('username', 'password')}),
        ('Personal info', {'fields': ('first_name', 'last_name', 'email')}),
        ('Permissions', {'fields': ('is_active',)}),
        (None, {'fields': ('allowed_online_calls',)}),
    )
    exclude = ['user_permission']


admin.site.register(Supervisor, SupervisorAdmin)

------------------------------------------------------------------------------------

If you need to override the form fields:

class SupervisorChangeForm(UserChangeForm):

    def __init__(self, *args, **kwargs):
        super(SupervisorChangeForm, self).__init__(*args, **kwargs)
        self.fields['allowed_online_calls'] = forms.ModelMultipleChoiceField(
            queryset=Choices.objects.filter(choice='customer'),
            widget=forms.CheckboxSelectMultiple())

    class Meta(UserChangeForm.Meta):
        model = Supervisor

+Models - Ranges of IntegerFields (Aug. 21, 2015, 10:22 p.m.)

BigIntegerField:
A 64 bit integer, much like an IntegerField except that it is guaranteed to fit numbers from -9223372036854775808 to 9223372036854775807

-------------------------------------------------------------

IntegerField:
Values from -2147483648 to 2147483647 are safe in all databases supported by Django.

-------------------------------------------------------------

PositiveIntegerField:
Like an IntegerField, but must be either positive or zero (0). Values from 0 to 2147483647 are safe in all databases supported by Django. The value 0 is accepted for backward compatibility reasons.

-------------------------------------------------------------

PositiveSmallIntegerField:
Like a PositiveIntegerField, but only allows values under a certain (database-dependent) point. Values from 0 to 32767 are safe in all databases supported by Django.

-------------------------------------------------------------

SmallIntegerField:
Like an IntegerField, but only allows values under a certain (database-dependent) point. Values from -32768 to 32767 are safe in all databases supported by Django.

-------------------------------------------------------------

+Admin - Adding Action to Export/Download CSV file (Aug. 24, 2015, 1:04 p.m.)

class VirtualOfficeAdmin(admin.ModelAdmin):
    actions = ['download_csv']
    list_display = ('persian_name', 'english_name', 'office_type', 'active')
    list_filter = ('office_type', 'active')

    def download_csv(self, request, queryset):
        import csv
        import StringIO

        from django.http import HttpResponse
        from django.utils.encoding import smart_str

        f = StringIO.StringIO()
        writer = csv.writer(f)
        writer.writerow(
            ["owner", "office type", "persian name", "english name", "cellphone number", "phone number", "address"])
        for s in queryset:
            owner = smart_str(s.owner.get_full_name())
            persian_name = smart_str(s.persian_name)

            # Office Type
            office_type = s.office_type
            if office_type == 're':
                office_type = smart_str(ugettext('Real Estate'))
            elif office_type == 'en':
                office_type = smart_str(ugettext('Engineer'))
            elif office_type == 'ar':
                office_type = smart_str(ugettext('Architect'))

            writer.writerow(
                [owner, office_type, persian_name, s.english_name, '09' + s.owner.username, s.phone_number, s.address])

        f.seek(0)
        response = HttpResponse(f, content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename=stat-info.csv'
        return response

    download_csv.short_description = _("Download CSV file for selected stats.")
---------------------------------------------------------------------------------------------
from django.contrib.admin.helpers import ActionForm
from django import forms
from django.utils.translation import ugettext_lazy as _
from django.contrib import messages

class ChangeMembershipTypeForm(ActionForm):
    MEMBERSHIP_TYPE = (
        ('1', _('Gold')),
        ('2', _('Silver')),
        ('3', _('Bronze')),
        ('4', _('Basic'))
    )
    membership_type = forms.ChoiceField(choices=MEMBERSHIP_TYPE, label=_('membership type'), required=False)


class CompanyAdmin(admin.ModelAdmin):
    action_form = ChangeMembershipTypeForm
    actions = ['change_membership_type']

    def change_membership_type(self, request, queryset):
        membership_type = request.POST['membership_type']
        queryset.update(membership_type=membership_type)
        self.message_user(request, _('Successfully updated membership type for %d rows') % (queryset.count(),),
                          messages.SUCCESS)
    change_membership_type.short_description = _('Change Membership Type')

+Custom Template Tags & Filters (April 6, 2016, 2:30 p.m.)

1- Create a module named `templatetags` in an app.

2- Create a py file with a desired name. (I usually choose the app name for this python file name)

3- Write the methods you need, in the python file.

4- There is no need to introduce these methods or files in `settings.py`.

----------------------------------------------------------------------------------------------------

================= Template Filters Examples =================

from django.template import Library


register = Library()


@register.filter
def trim_value(value):
    value = str(value)
    if value.endswith('.0'):
        return value.replace('.0', '')
    else:
        return value
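
To use a filter, load the tag library by its file name in the template first (assuming the file from step 2 is named myapp.py):

{% load myapp %}
{{ value|trim_value }}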

----------------------------------------------------------------------------------------------------

@register.filter
def get_decimal(value):
    if value:
        import decimal
        return str(decimal.Decimal('{0:.4f}'.format(value)))
    else:
        return '0'

----------------------------------------------------------------------------------------------------

@register.filter
def get_minutes(total_seconds):
    if total_seconds:
        return round(total_seconds / 60, 2)
    else:
        return 0

----------------------------------------------------------------------------------------------------

@register.filter
def get_acd(request):
    if request:
        minutes = get_minutes(request.session['total_seconds'])
        if minutes:
            return round(minutes / request.session['total_calls'], 2)
        else:
            return 0
    else:
        return 0

----------------------------------------------------------------------------------------------------

@register.filter
def round_values(value, digit):
    if digit and digit.isdigit():
        return round(value, int(digit))
    else:
        return value

----------------------------------------------------------------------------------------------------

@register.filter
def calculate_currency_rate(value, invoice):
    from decimal import Decimal
    if invoice.rate_currency:
        return round(Decimal(value) * Decimal(invoice.rate), 2)
    else:
        return value

----------------------------------------------------------------------------------------------------

================= Template Tags Examples =================

Important Hint:
You can return anything you like from a tag, including a queryset. However, you can't use a tag inside the for tag; you can only use a variable there (or a variable passed through a filter).

from django.template import Library, Node, TemplateSyntaxError, Variable

from youstone.models import Ad


register = Library()


class AdsNode(Node):
    def __init__(self, usage, position, province):
        self.usage, self.position, self.province = Variable(usage), Variable(position), Variable(province)

    def render(self, context):
        usage = self.usage.resolve(context)
        position = self.position.resolve(context)
        province = self.province.resolve(context)
        ads = Ad.objects.filter(active=True, usage=usage)
        if position:
            ads = ads.filter(position=position)

        if province:
            print('PROVINCE', province)

        context['ads'] = ads

        return ''


@register.tag
def get_ads(parser, token):
    try:
        tag_name, usage, position, province, _as, var_name = token.split_contents()
    except ValueError:
        raise TemplateSyntaxError(
            'get_ads takes 4 positional arguments but %s were given.' % len(token.split_contents()))

    if _as != 'as':
        raise TemplateSyntaxError('get_ads syntax must be "get_ads <usage> <position> <province> as <var_name>."')

    return AdsNode(usage, position, province)

----------------------------------------------------------------------------------------------------

Then you can use the template tag like this in the template:
{% get_ads usage position province as ads %}
{% for ad in ads %}

{% endfor %}

----------------------------------------------------------------------------------------------------

+Resize Image (Aug. 9, 2015, 10:34 p.m.)

Create a python module named resize_image.py and copy & paste this snippet:

---------------------------------------------------------------------------------------------

from PIL import Image

from django.conf import settings


def resize_image(sender, instance, created, **kwargs):
    width = settings.SLIDER_WIDTH
    height = settings.SLIDER_HEIGHT

    img = Image.open(instance.image.path)
    if img.mode != 'RGB':
        img = img.convert('RGB')
    img.resize((width, height), Image.ANTIALIAS).save(instance.image.path, format='JPEG')

Note that resize() returns a resized copy of an image. It doesn't modify the original.
So do not write code like this:
img.resize((width, height), Image.ANTIALIAS)
img.save(instance.image.path, format='JPEG')

--------------------------------------------------------------------------------------------

In the settings:
# Slider Image Size
SLIDER_WIDTH = 1000
SLIDER_HEIGHT = 600

---------------------------------------------------------------------------------------------

In models.py:
from resize_image import resize_image


class Slider(models.Model):
    pass


models.signals.post_save.connect(resize_image, sender=Slider)

--------------------------------------------------------------------------------------------

+Extending User Model using OneToOne relationship (Aug. 5, 2015, 4:43 p.m.)

from django.db.models.signals import post_save
from django.conf import settings


class Customer(models.Model):
    user = models.OneToOneField(settings.AUTH_USER_MODEL, unique=True, primary_key=True)


def create_customer(sender, instance, created, **kwargs):
    if created:
        Customer.objects.get_or_create(user=instance)


post_save.connect(create_customer, sender=settings.AUTH_USER_MODEL)
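
The profile object is then reachable from the user instance (a small usage sketch; `some_user` is illustrative):

customer = some_user.customer  # reverse side of the OneToOneField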

+Admin - Overriding admin ModelForm (Nov. 30, 2015, 3:49 p.m.)

class MachineCompareForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super(MachineCompareForm, self).__init__(*args, **kwargs)
        self.model_fields = [['field_%s' % title.pk, title.feature, title.pk] for title in CompareTitle.objects.all()]
        for field in self.model_fields:
            self.base_fields[field[0]] = forms.CharField(max_length=400, label='%s' % field[1], required=False)
            self.fields[field[0]] = forms.CharField(max_length=400, label='%s' % field[1], required=False)
            if self.instance.pk:
                feature = CompareFeature.objects.filter(machine=self.instance.machine.pk, feature=field[2])
                if feature:
                    self.base_fields[field[0]].initial = feature[0].value
                    self.fields[field[0]].initial = feature[0].value

    def save(self, commit=True):
        instance = super(MachineCompareForm, self).save(commit=False)
        for field in self.model_fields:
            if CompareFeature.objects.filter(machine=self.cleaned_data['machine'], feature=field[2]):
                CompareFeature.objects.filter(machine=self.cleaned_data['machine'], feature=field[2]).update(
                    feature_id=field[2],
                    machine=self.cleaned_data['machine'],
                    value=self.cleaned_data[field[0]])
            else:
                CompareFeature.objects.create(feature_id=field[2],
                                              machine=self.cleaned_data['machine'],
                                              value=self.cleaned_data[field[0]])

        if commit:
            instance.save()
        return instance

    class Meta:
        model = MachineCompare
        exclude = []


class MachineCompareAdmin(admin.ModelAdmin):
    form = MachineCompareForm

    def get_form(self, request, obj=None, **kwargs):
        return MachineCompareForm

---------------------------------------------------------------------------------------------

class SpecialPageAdmin(admin.ModelAdmin):
    list_display = ('company', 'url_name', 'active',)
    search_fields = ('company__name', 'url_name')
    form = SpecialPageForm

    def get_form(self, request, obj=None, **kwargs):
        return SpecialPageForm


class SpecialPageForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super(SpecialPageForm, self).__init__(*args, **kwargs)
        for i in range(1, 16):
            self.fields['image-%s' % i] = forms.ImageField(label='%s %s' % (_('Image'), i))
            self.base_fields['image-%s' % i] = forms.ImageField(label='%s %s' % (_('Image'), i))

    class Meta:
        model = SpecialPage
        exclude = []

---------------------------------------------------------------------------------------------

+Model - Overriding delete method in model (Nov. 28, 2015, 12:29 p.m.)

from django.db.models.signals import pre_delete
from django.dispatch.dispatcher import receiver


@receiver(pre_delete, sender=MyModel)
def _mymodel_delete(sender, instance, **kwargs):
    print('deleting')

+Union of querysets (July 20, 2015, 5:14 p.m.)

import itertools

result = itertools.chain(qs1, qs2, qs3, qs4)

-------------------------------------------------------------------

records = query1 | query2

-------------------------------------------------------------------

+Views - Concatenating querysets and converting to JSON (July 17, 2015, 9:05 p.m.)

from itertools import chain


combined = list(chain(collectionA, collectionB))
json = serializers.serialize('json', combined)

---------------------------------------------------------------------------------------

final_queryset = (queryset1 | queryset2)

+Template - nbsp template tag (Replace usual spaces in string by non breaking spaces) (July 9, 2015, 2:45 a.m.)

from django import template
from django.utils.safestring import mark_safe

register = template.Library()


@register.filter()
def nbsp(value):
    return mark_safe("&nbsp;".join(value.split(' ')))
------------------------------------------------------
Usage:
{% load nbsp %}

{{ user.full_name|nbsp }}

OR

{{ note.note|nbsp|linebreaksbr }}

+Views - Delete old uploaded file/image before saving the new one (July 8, 2015, 8:24 p.m.)

import os
from django.conf import settings

try:
    os.remove(settings.BASE_DIR + logo.image.name)
    logo.delete()
except (OSError, IOError):
    pass

+Admin - list_display with a callable (Jan. 3, 2016, 10:17 a.m.)

class ExcelFile(models.Model):
    file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])
    companies = models.ManyToManyField(Company, verbose_name=_('companies'), blank=True)
    business = models.ForeignKey(BusinessTitle, verbose_name=_('business'))

    def __str__(self):
        return '%s' % self.business.title

    def get_file_name(self):
        return self.file.name.split('/')[1]
    get_file_name.short_description = _('File Name')

------------------------------------------------------------------------------------------

class ExcelFileAdmin(admin.ModelAdmin):
    list_display = ['get_file_name', 'business']

------------------------------------------------------------------------------------------

def change_order(self):
    return '<a href="review/">%s</a>' % _('Edit Order')
change_order.short_description = _('Edit Order')
change_order.allow_tags = True
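
In Django 2.0+ allow_tags was removed; the same column can be written with format_html instead (a sketch of the snippet above):

from django.utils.html import format_html

def change_order(self):
    return format_html('<a href="review/">{}</a>', _('Edit Order'))
change_order.short_description = _('Edit Order')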

------------------------------------------------------------------------------------------

+Admin - Hide fields (July 8, 2015, 1:31 p.m.)

from django.contrib import admin

from .models import ExcelFile


class ExcelFileAdmin(admin.ModelAdmin):
    exclude = ['companies']


admin.site.register(ExcelFile, ExcelFileAdmin)

+Model - Validators (Jan. 28, 2016, 12:03 a.m.)

import xlrd

from django.core.exceptions import ValidationError


def validate_excel_file(file):
    try:
        xlrd.open_workbook(file_contents=file.read())
    except xlrd.XLRDError:
        raise ValidationError(_('%s is not an Excel File') % file.name)


class ExcelFile(models.Model):
    excel_file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])

-------------------------------------------------------------------

from django.core import validators


mobile_number = models.CharField(
    _('mobile number'),
    blank=True,
    max_length=11,
    validators=[
        validators.RegexValidator(
            regex=r'^09[0-9]{9}$',
            message=_('Please enter a valid mobile number.')
        )
    ]
)

-------------------------------------------------------------------

+Admin - Allow only one instance of object to be created (July 8, 2015, 12:41 p.m.)

def validate_only_one_instance(obj):
    model = obj.__class__
    if model.objects.count() > 0 and obj.id != model.objects.get().id:
        raise ValidationError(
            _('Can only create 1 %s instance') % model.__name__
        )


class Settings(models.Model):
    banner = models.ImageField(_('banner'), upload_to='images/machines/settings',
                               help_text=_('The required image size is 960px by 250px.'))

    def __str__(self):
        return '%s' % _('Settings')

    def clean(self):
        validate_only_one_instance(self)

--------------------------- ANOTHER ONE ---------------------------------------------

class ExcelFile(models.Model):
    excel_file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])
    companies = models.ManyToManyField(Company, verbose_name=_('companies'), blank=True)
    business = models.ForeignKey(BusinessTitle, verbose_name=_('business'))

    def __str__(self):
        return '%s' % self.business.title

    def clean(self):
        model = self.__class__
        validation_error = _("Can only create 1 %s instance") % self.business.title
        business = model.objects.filter(business=self.business)
        # If the user is updating/editing an object
        if self.pk:
            if business and self.pk != business[0].pk:
                raise ValidationError(validation_error)
        # If the user is inserting/creating an object
        else:
            if business:
                raise ValidationError(validation_error)

+Speeding Up Django Links (June 18, 2015, 12:41 p.m.)

http://vincent.is/speeding-up-django-postgres/

+Django Analytical (June 7, 2015, 4:52 p.m.)

1- easy_install django-analytical

2- Add it to INSTALLED_APPS:
INSTALLED_APPS = [
    ...
    'analytical',
    ...
]

3- In the base.html:
{% load analytical %}
<!DOCTYPE ... >
<html>
<head>
    {% analytical_head_top %}

    ...

    {% analytical_head_bottom %}
</head>
<body>
    {% analytical_body_top %}

    ...

    {% analytical_body_bottom %}
</body>
</html>

4- Create an account on this site:
http://clicky.com/66453175
I have already registered: the username is Mohsen_Hassani and the password is MohseN4301.

5- Some JavaScript snippets should be taken from clicky.com into your template. They look like the following and should be placed before the closing </body> and </html> tags:
<script src="//static.getclicky.com/js" type="text/javascript"></script>
<script type="text/javascript">try{ clicky.init(100851091); }catch(e){}</script>
<noscript><p><img alt="Clicky" width="1" height="1" src="//in.getclicky.com/100851091ns.gif" /></p></noscript>


+Templates - Do Mathematic (Jan. 14, 2016, 2:14 p.m.)

http://slacy.com/blog/2010/07/using-djangos-widthratio-template-tag-for-multiplication-division/

Using Django’s widthratio template tag for multiplication & division.

I find it a bit odd that Django has a template filter for adding values, but none for multiplication and division. It’s fairly straightforward to add your own math tags or filters, but why bother if you can use the built-in one for what you need?

Take a closer look at the widthratio template tag. Given {% widthratio a b c %} it computes (a/b)*c

So, if you want to do multiplication, all you have to do is pass b=1, and the result will be a*c.

Of course, you can do division by passing c=1. (a=1 would also work, but has possible rounding side effects)

Note: The results are rounded to an integer before returning, so this may have marginal utility for many cases.

So, in summary:

to compute A*B: {% widthratio A 1 B %}
to compute A/B: {% widthratio A B 1 %}

And, since add is a filter and not a tag, you can always do crazy stuff like:

compute A^2: {% widthratio A 1 A %}
compute (A+B)^2: {% widthratio A|add:B 1 A|add:B %}
compute (A+B) * (C+D): {% widthratio A|add:B 1 C|add:D %}

+URLS - Allow entering dot (.) in url pattern (Dec. 2, 2014, 10:03 p.m.)

[-\w.]+
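
For example, in a url pattern (the view and names here are illustrative):

url(r'^files/(?P<file_name>[-\w.]+)/$', views.download_file, name='download_file'),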

+Change the value of QueryDict (Nov. 18, 2014, 2:17 a.m.)

If you try to change a value on a QueryDict (such as request.POST) you will get an error:
“This QueryDict instance is immutable”

So this is how you can change it (the whole of it, or any item inside):
mutable = request.POST._mutable
request.POST._mutable = True
request.session['search_criteria']['region'] = rid
request.session.save()
request.POST = request.session['search_criteria']
request.POST._mutable = mutable
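
Alternatively, QueryDict.copy() returns a mutable copy, which avoids touching the private _mutable attribute (a common pattern; `rid` as above):

data = request.POST.copy()
data['region'] = rid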

+Templates - Conditional Extend (Sept. 22, 2014, 11:45 a.m.)

{% extends supervising|yesno:"supervising/tasks.html,desktop/tasks_list.html" %}

{% extends variable %} uses the value of variable. If the variable evaluates to a string, Django will use that string as the name of the parent template. If the variable evaluates to a Template object, Django will use that object as the parent template.

+Adding CSS class in a ModelForm (Sept. 13, 2014, 1:15 a.m.)

self.fields['specie'].widget.attrs['class'] = 'autocomplete'
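
This line belongs in the form's __init__ (a minimal sketch; the form and model names are illustrative):

class AnimalForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(AnimalForm, self).__init__(*args, **kwargs)
        self.fields['specie'].widget.attrs['class'] = 'autocomplete'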

+Models - Overriding save method (Aug. 21, 2014, 1:03 p.m.)

from tastypie.utils.timezone import now
from django.contrib.auth.models import User
from django.db import models
from django.utils.text import slugify


class Entry(models.Model):
    user = models.ForeignKey(User)
    pub_date = models.DateTimeField(default=now)
    title = models.CharField(max_length=200)
    slug = models.SlugField()
    body = models.TextField()

    def __unicode__(self):
        return self.title

    def save(self, *args, **kwargs):
        # For automatic slug generation.
        if not self.slug:
            self.slug = slugify(self.title)[:50]

        return super(Entry, self).save(*args, **kwargs)

+auto_now / auto_now_add (Aug. 21, 2014, 1:02 p.m.)

created_at = models.DateTimeField(_('created at'), auto_now_add=True)
updated_at = models.DateTimeField(_('updated at'), auto_now=True)

------------------------------------------------------------------------------

class Blog(models.Model):
title = models.CharField(max_length=100)
added = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)

auto_now_add tells Django that when you add a new row, you want the current date & time saved with it. auto_now tells Django to save the current date & time EVERY time the record is saved.

------------------------------------------------------------------------------

auto_now

Automatically set the field to now every time the object is saved. Useful for “last-modified” timestamps.

The field is only automatically updated when calling Model.save(). The field isn’t updated when making updates to other fields in other ways such as QuerySet.update(), though you can specify a custom value for the field in an update like that.

------------------------------------------------------------------------------

auto_now_add

Automatically set the field to now when the object is first created. Useful for the creation of timestamps.

Even if you set a value for this field when creating the object, it will be ignored. If you want to be able to modify this field, set the following instead of auto_now_add=True (sketched below):

- For DateField: default=date.today (from datetime import date)
- For DateTimeField: default=timezone.now (from django.utils import timezone)
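
A minimal sketch of the editable alternative (the model and field names are illustrative):

from datetime import date

from django.db import models
from django.utils import timezone


class Report(models.Model):
    created_on = models.DateField(default=date.today)  # editable; today by default
    created_at = models.DateTimeField(default=timezone.now)  # editable; now by default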

------------------------------------------------------------------------------

+Query - Call a field name by dynamic values (Aug. 21, 2014, 12:58 p.m.)

properties = Properties.objects.filter(**{'%s__age_status' % p_type: request.POST['age_status']})

+Settings - Set a settings for shell (Aug. 21, 2014, 12:56 p.m.)

DJANGO SETTINGS MODULE for shell:
python manage.py shell --settings=nimkatonilne.settings

+Admin - Deleting the file/image on deleting an object (Aug. 21, 2014, 12:54 p.m.)

1- Create a file named `clean_up.py` with the following contents:

import os

from django.conf import settings


def clean_up(sender, instance, *args, **kwargs):
    field_types = ['FileBrowseField', 'ImageField', 'FileField']
    for field in sender._meta.get_fields():
        if field.__class__.__name__ in field_types:
            try:
                os.remove(settings.MEDIA_ROOT + str(getattr(instance, field.name)))
            except (OSError, IOError):
                pass
--------------------------------------------------------------------------------------------
2- Open the models.py file:

Import the `clean_up` function from the `clean_up` module and add the following line at the bottom of each model having a FileField or ImageField or FileBrowseField:

models.signals.post_delete.connect(clean_up, sender=Ads)

+URLS - Redirect to a URL in urls.py (Aug. 21, 2014, 12:53 p.m.)

from django.views.generic import RedirectView
from django.core.urlresolvers import reverse_lazy

(r'^one/$', RedirectView.as_view(url='/another/')),

OR

url(r'^some-page/$', RedirectView.as_view(url=reverse_lazy('my_named_pattern'))),

+Forms - Overriding and manipulating fields (Nov. 30, 2015, 12:35 p.m.)

class CheckoutForm(forms.ModelForm):

    def __init__(self, request, *args, **kwargs):
        super(CheckoutForm, self).__init__(*args, **kwargs)
        self.request = request
        print(request.user)

    class Meta:
        model = Address
        exclude = ('fax_number',)

-------------------------------------------------

class ProfileForm(forms.Form):
    required_css_class = 'required'

-------------------------------------------------

def __init__(self, request, *args, **kwargs):
    super(InstituteRegistrationForm, self).__init__(*args, **kwargs)
    self.request = request

-------------------------------------------------

if request.user.cellphone:
    self.fields['cell_phone_number'].widget.attrs['readonly'] = 'true'

-------------------------------------------------

if request.user.email:
    self.fields['email'].widget.attrs['readonly'] = 'true'

-------------------------------------------------

self.fields['city'].queryset = City.objects.filter(province__allow_delete=False)
self.fields['city'].initial = '1'

-------------------------------------------------

self.fields['first_name'].required = True
self.fields['first_name'].widget.attrs['required'] = True

-------------------------------------------------

for field in self.fields.values():
    field.widget.attrs['required'] = True
    field.required = True

-------------------------------------------------

self.fields['national_team'].empty_label = None

-------------------------------------------------

self.fields['allowed_online_calls'] = forms.ModelMultipleChoiceField(
    queryset=Choices.objects.filter(choice='customer'),
    widget=forms.CheckboxSelectMultiple())

-------------------------------------------------

Hide a field:
self.fields['state'].widget = forms.HiddenInput()

-------------------------------------------------

class UpdateShare(forms.ModelForm):
    class Meta:
        model = ManualEntries
        exclude = ['dt']
        widgets = {
            'description': forms.Textarea(attrs={'rows': 3}),
        }

-------------------------------------------------

class QuestionnaireForm(forms.ModelForm):
    class Meta:
        model = Questionnaire
        fields = ['code', 'title', 'grades', 'description', 'enable']
        widgets = {
            'grades': forms.CheckboxSelectMultiple
        }

-------------------------------------------------

self.fields['amount'].help_text = 'AAA'

-------------------------------------------------

Change ModelChoiceField items text:

self.fields['parent'].label_from_instance = lambda obj: obj.other_name

-------------------------------------------------

def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)

-------------------------------------------------

When passing instance in render, like:
{'form': ProfileForm(instance=request.user)}

if you needed to change values in __init__ of ModelForm use "self.initial":

self.initial['first_name'] = 'aa'

-------------------------------------------------

class CertificateForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        date = self['date'].value()
        if date and not isinstance(date, str):
            self.initial['date'] = '-'.join([str(x) for x in list(get_persian_date(date).values())])

-------------------------------------------------

Change max_length validator error message:

caller_id.validators[-1].message = _('The text is too long.')

-------------------------------------------------

Get the label of a choice in ChoiceField

dict(form.fields['city'].choices)[data['city']]

-------------------------------------------------

Validate a form field:

class CDRSearchForm(forms.Form):
    from_date = forms.CharField(label=_('from date'), max_length=50)
    to_date = forms.CharField(label=_('to date'), max_length=50)

    def clean_to_date(self):
        # Fields are cleaned in declaration order, so from_date is already
        # available here; compare the two values and raise
        # forms.ValidationError when the range is invalid.
        from_date = self.cleaned_data['from_date']
        to_date = self.cleaned_data['to_date']

        return self.cleaned_data['to_date']

-------------------------------------------------

Docker
+Docker Swarm - Nodes (Sept. 23, 2020, 8:28 p.m.)

A docker swarm comprises a group of physical or virtual machines operating in a cluster. When a machine joins the cluster, it becomes a node in that swarm. Docker swarm recognizes three different types of nodes, each with a different role within the docker swarm ecosystem:


- Manager Node
The primary function of manager nodes is to assign tasks to worker nodes in the swarm. Manager nodes also help to carry out some of the managerial tasks needed to operate the swarm. Docker recommends a maximum of seven manager nodes for a swarm.


- Leader Node
When a cluster is established, the Raft consensus algorithm is used to assign one of them as the "leader node". The leader node makes all of the swarm management and task orchestration decisions for the swarm. If the leader node becomes unavailable due to an outage or failure, a new leader node can be elected using the Raft consensus algorithm.


- Worker Node
In a docker swarm with numerous hosts, each worker node functions by receiving and executing the tasks that are allocated to it by manager nodes. By default, all manager nodes are also worker nodes and are capable of executing tasks when they have the resources available to do so.

+Docker Swarm - Mode Services (Sept. 23, 2020, 8:26 p.m.)

Docker Swarm has two types of services: replicated and global.

Replicated services: you specify the number of replica tasks, and the swarm manager assigns them to the available nodes.

Global services: the swarm manager schedules one task on each available node that meets the service's constraints and resource requirements.
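
For example (the service and image names are illustrative):

docker service create --mode replicated --replicas 3 --name web nginx

docker service create --mode global --name monitor alpine ping docker.com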

+Handy Commands (Sept. 19, 2020, 3:38 p.m.)

docker container ls -a

docker container rm <container_id>
docker container rm $(docker ps -a -q)

docker container stop <container_id>
docker container stop $(docker ps -a -q)

docker image ls
docker image rm <image_id>

docker image rm $(docker image ls -q)

--------------------------------------------------------------------

docker-compose -f docker-compose.yml up --build

--------------------------------------------------------------------

+Docker Compose compatibility matrix (Sept. 12, 2020, 11:38 a.m.)

https://docs.docker.com/compose/compose-file/


Compose file format Docker Engine release
3.8 19.03.0+
3.7 18.06.0+
3.6 18.02.0+
3.5 17.12.0+
3.4 17.09.0+

+Docker Service (Aug. 9, 2020, 5:33 p.m.)

To deploy an application image when Docker Engine is in swarm mode, you create a service. Frequently a service is an image for a microservice within the context of some larger application.

Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.


When you create a service, you specify which container image to use and which commands to execute inside running containers. You also define options for the service (a combined example follows this list), including:

- The port where the swarm makes the service available outside the swarm
- An overlay network for the service to connect to other services in the swarm
- CPU and memory limits and reservations
- A rolling update policy
- The number of replicas of the image to run in the swarm
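
Putting a few of these options together (all values are illustrative):

docker service create \
  --name web \
  --replicas 3 \
  --publish 8080:80 \
  --limit-memory 256M \
  nginx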

+docker run vs docker service (Aug. 9, 2020, 5:29 p.m.)

The docker run command creates and starts a container on the local docker host.

A docker "service" is one or more containers with the same configuration running under the docker's swarm mode.

The difference is that you now have orchestration. That orchestration restarts your container if it stops, finds the appropriate node to run the container on based on your constraints, scales your service up or down, lets you use the mesh networking and a VIP to discover your service, and performs rolling updates to minimize the risk of an outage during a change to your running application.

-----------------------------------------------------------------------------

+Docker Swarm - Drain a node (July 30, 2020, 9:53 a.m.)

The swarm manager can assign tasks to any ACTIVE node.

DRAIN availability prevents a node from receiving new tasks from the swarm manager.
It means the manager stops tasks running on the node and launches replica tasks on a node with ACTIVE availability.

Setting a node to DRAIN does not remove standalone containers from that node, such as those created with docker run, docker-compose up, or the Docker Engine API. A node’s status, including DRAIN, only affects the node’s ability to schedule swarm service workloads.

----------------------------------------------------------

To drain a node run the following command:

docker node update --availability drain worker1

----------------------------------------------------------

When you set the node back to Active availability, it can receive new tasks:

- During a service update to scale up
- During a rolling update
- When you set another node to Drain availability
- When a task fails on another active node

----------------------------------------------------------

+Docker Swarm - Delete a service (July 30, 2020, 12:49 a.m.)

docker service rm helloworld



Verify that the swarm manager removed the service:
docker service inspect helloworld
[]
Error: no such service: helloworld


docker ps

+Docker Swarm - Scale a Service (July 29, 2020, 11:32 p.m.)

Change the desired state of the service running in the swarm:
docker service scale <SERVICE-ID>=<NUMBER-OF-TASKS>


For example:
docker service scale helloworld=5


docker service ps <SERVICE-ID>
docker service ps helloworld

+Docker Swarm - Deploy a Service (July 29, 2020, 10:53 p.m.)

docker service create --replicas 1 --name helloworld alpine ping docker.com

docker service ls

docker service inspect --pretty <SERVICE-ID>

------------------------------------------------------

To see which nodes are running the service:

docker service ps <SERVICE-ID>

------------------------------------------------------

+Docker Swarm - Set up (July 29, 2020, 8:23 p.m.)

1- Create three virtual machines (one Manager and two Workers).
Set proper hostnames because the nodes are named after their hostnames.
Install Docker using my notes, on all the three servers.


2- Run the following command on the Manager server to create a new swarm:
docker swarm init --advertise-addr <ip_address>

The --advertise-addr flag configures the manager node to publish its address as <the_ip_address>


3- Add nodes to the swarm (Copy the command from the Manager machine):
docker swarm join --token SWMTKN-1-1....... <ip_address>:2377
Run the above command on both the Worker servers.

-------------------------------------------------------------

To check if the state of the swarm is Manager:

docker info | grep -i is\ m

-------------------------------------------------------------

To view information about nodes:

docker node ls

The * next to the node ID indicates that you’re currently connected on this node.

-------------------------------------------------------------

If you don’t have the "swarm join" command available, you can run the following command on a manager node to retrieve the join command for a worker:

docker swarm join-token worker

-------------------------------------------------------------

+Docker Volumes (July 21, 2020, 3:36 p.m.)

Docker volumes are used for data persistence in Docker. For example, if you have databases or other stateful applications, you would want to use Docker Volumes for that.

---------------------------------------------------------------------------

When do we need Docker volumes?
Let's say we have a database container running on a host. A container has a virtual filesystem where the data is usually stored. Here, there is no persistence. If we were to remove the container or stop/restart the container, then the data in this virtual filesystem is gone and it starts from a fresh state. This is obviously not very practical because we want to save changes that our application is making in the database, and that's when we need Docker volumes.

---------------------------------------------------------------------------

What are Docker volumes?
On a host, we have a physical filesystem (/home/mount/data), and the way volumes work is that we plug the physical filesystem path (it could be a folder/directory), and we plug it into the container's filesystem path. So, in simple terms, a directory/folder on the host filesystem is mounted into a directory folder in the virtual filesystem of Docker. So, what happens is that when the container writes to its filesystem, it gets replicated or automatically written on the host filesystem directory, and vice versa, if we change something on the host filesystem, it automatically appears in the container as well.

---------------------------------------------------------------------------

There are 3 different types of Docker volumes and different ways of creating them:

- Host Volumes:
Usually, the way to create docker volumes is using the (docker run) command.

docker run -v <host_direcotry>:<container_directory>
docker run -v /home/mohsen/data:/var/lib/mysql/data

We just connect the connection or the references from host to container.

The main characteristic of this type is that we decide where on the host filesystem that reference is made (which folder on the host filesystem we mount into the container).


- Anonymous Volumes:
The second type is where you create a volume just by referencing the container directory. You don't specify which directory on the host should be mounted; that is taken care of by Docker itself. The directory is automatically created by Docker under /var/lib/docker/volumes/.

docker run -v <container_directory>
docker run -v /var/lib/mysql/data

- Named Volumes:
It is an improved version of the Anonymous Volumes type: it specifies the name of that folder on the host filesystem, and we choose the name.

docker run -v name:<container_directory>
docker run -v name:/var/lib/mysql/data

---------------------------------------------------------------------------

Of the above 3 types of Docker Volumes, the most widely used one, and the one we should be using in production, is the Named Volumes type, because there are additional benefits to letting Docker manage those volume directories on the host.

---------------------------------------------------------------------------

Creating Docker Volumes with Docker Compose works the same way:

version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    volumes:
      - db-data:/var/lib/mysql/data
volumes:
  db-data:

---------------------------------------------------------------------------

+Docker Swarm (July 18, 2020, 2:34 p.m.)

The technology that is actually comparable with Kubernetes, is Docker Swarm.

Docker Swarm is basically an alternative to Kubernetes, which is a container orchestration tool.
Instead of Kubelets (the service that enables Docker to run in Kubernetes cluster nodes), you would have services called Docker Daemons that run on each node, and instead of the Kubernetes engine you would just have Docker, which spans the multiple nodes that make up the cluster.

-----------------------------------------------------------------------------

+WORKDIR (July 17, 2020, 2:12 a.m.)

The WORKDIR command is used to define the working directory of a Docker container at any given time. The command is specified in the Dockerfile.

Any RUN, CMD, ADD, COPY, or ENTRYPOINT command will be executed in the specified working directory.

If the directory named by WORKDIR does not exist, it will be created automatically by Docker. Hence, it can be said that the command performs mkdir and cd implicitly.


Example:

FROM ubuntu:16.04
WORKDIR /project
RUN npm install

If the project directory does not exist, it will be created. The RUN command will be executed inside /project.

---------------------------------------------------------------

Reusing WORKDIR

WORKDIR can be reused to set a new working directory at any stage of the Dockerfile. The path of the new working directory must be given relative to the current working directory.


Example:

FROM ubuntu:16.04
WORKDIR /project
RUN npm install
WORKDIR ../project2
RUN touch file1.cpp

While directories can be manually made and changed, it is strongly recommended that you use WORKDIR to specify the current directory in which you would like to work, as it makes troubleshooting easier.

---------------------------------------------------------------

+Docker Compose commands (July 17, 2020, 1:24 a.m.)

docker-compose rm

---------------------------------------------------------------

docker-compose up --build web

docker-compose up -d --build

docker-compose up db

docker-compose -f docker-compose.prod.yml up --build -d

---------------------------------------------------------------

docker-compose run web

---------------------------------------------------------------

Stop containers and remove the volumes created by up.

docker-compose down --volumes

docker-compose down --volumes --remove-orphans


Stop containers and remove containers, networks, volumes, and images created by up.

docker-compose down --rmi all --volumes

---------------------------------------------------------------

docker-compose -f docker-compose.prod.yml run web python manage.py migrate
docker-compose -f docker-compose.prod.yml run web python manage.py collectstatic --noinput

---------------------------------------------------------------

+COPY vs ADD (July 17, 2020, 1:06 a.m.)

Although ADD and COPY are functionally similar, generally speaking, COPY is preferred. That’s because it’s more transparent than ADD. COPY only supports the basic copying of local files into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious. Consequently, the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /.

+RUN vs CMD (July 17, 2020, 12:59 a.m.)

RUN is an image build step, the state of the container after a RUN command will be committed to the container image. A Dockerfile can have many RUN steps that layer on top of one another to build the image.


CMD is the command that the container executes by default when you launch the built image. A Dockerfile will only use the final CMD defined. The CMD can be overridden when starting a container with docker run $image $other_command.

ENTRYPOINT is also closely related to CMD and can modify the way a container starts an image.

------------------------------------------------------------------

RUN - command triggers while we build the docker image.

CMD - command triggers while we launch the created docker image.

------------------------------------------------------------------

RUN - Install Python, your container now has python burnt into its image.

CMD - python hello.py, run your favorite script.

------------------------------------------------------------------

+WORKDIR vs CD (July 17, 2020, 12:53 a.m.)

RUN cd / does absolutely nothing. WORKDIR / changes the working directory for future commands.

Each RUN command runs in a new shell and a new environment (and technically a new container, though you won't usually notice this).

The ENV and WORKDIR directives before it affect how it starts up. If you have a RUN step that just changes directories, that will get lost when the shell exits and the next step will start in the most recent WORKDIR of the image.


FROM busybox
WORKDIR /tmp
RUN pwd # /tmp

RUN cd / # no effect, resets after end of RUN line
RUN pwd # still /tmp

WORKDIR /
RUN pwd # /

RUN cd /tmp && pwd # /tmp
RUN pwd # /

+Docker compose file (July 16, 2020, 10:34 p.m.)

https://docs.docker.com/compose/compose-file/

------------------------------------------------------------------------------

version: '3.8'
services:
  web:
    restart: always
    build: ./web/
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    env_file: env
    volumes:
      - ./web:/data/web
    command: /usr/bin/gunicorn mydjango.wsgi:application -w 2 -b :8000


- Restart: This container should always be up, and it will restart if it crashes.

- Build: We have to build this image using a Dockerfile before running it, this specifies the directory where the Dockerfile is located.

- Expose: We expose port 8000 to linked containers (it will be used by the NGINX container).

- Links: We need access to the Postgres instance by the name "postgres" (this creates a "postgres" entry in the /etc/hosts file that points to the Postgres instance IP), and likewise for Redis.

- Env_file: This container will load all the environment variables from the env file.

- Volumes: We specify the different mount points we want on this instance

- Command: What command to run when starting the container? Here we start the WSGI process.

------------------------------------------------------------------------------

+Docker Compose vs Dockerfile (July 16, 2020, 8:48 p.m.)

Think of a Dockerfile as the set of instructions you would give your system administrator about what to install on a brand-new server. For example:
- We need a Debian linux
- Add an apache web server
- We need postgresql as well
- Install midnight commander
- When all done, copy all *.php, *.jpg, etc. files of our project into the webroot of the webserver (/var/www)


By contrast, think of docker-compose.yml as the set of instructions you would give your system administrator about how the server interacts with the rest of the world. For example:
- it has access to a shared folder from another computer,
- its port 80 is mapped to port 8000 of the host computer,


A Dockerfile is a simple text file that contains the commands a user could call to assemble an image.

The Compose file describes "the container in its running state", leaving the details on how to build the container to Dockerfiles.


Docker Compose
- is a tool for defining and running multi-container Docker applications.
- define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- get an app running in one command by just running docker-compose up


When you define your app with Compose in development, you can use this definition to run your application in different environments such as CI, staging, and production.

docker-compose makes it easy to start up multiple containers at the same time and automatically connect them together with some form of networking.

--------------------------------------------------------------------

Example Dockerfile:

FROM ubuntu:latest
MAINTAINER john doe

RUN apt-get update
RUN apt-get install -y python python-pip wget
RUN pip install Flask

ADD hello.py /home/hello.py

WORKDIR /home

--------------------------------------------------------------------

Example, docker-compose.yml:

version: "3"
services:
web:
build: .
ports:
- '5000:5000'
volumes:
- .:/code
- logvolume01:/var/log
links:
- redis
redis:
image: redis
volumes:
logvolume01: {}

--------------------------------------------------------------------

+Cheat Sheet (July 13, 2020, 1:11 p.m.)

## List Docker CLI commands
docker
docker container --help

## Display Docker version and info
docker --version
docker version
docker info

## Run the hello-world image. Pulls it from Docker Hub if it is not present locally.
docker run hello-world

## List Docker images
docker image ls

## List Docker containers (running, all, all in quiet mode)
docker container ls
docker container ls --all
docker container ls -aq

## This will open a terminal running a light linux distro named busybox
docker container run -i -t busybox /bin/sh

## Builds a Docker image from the Dockerfile. Run it from the root of the project. Uses caching. Don't miss the dot at the end of the command.
docker build .

## Builds the image from scratch (no caching). Don't miss the dot at the end of the command.
docker build --pull --no-cache .

## WARNING! This will remove:
## - all stopped containers
## - all networks not used by at least one container
## - all dangling images
## - all dangling build cache
docker system prune

## Build images before starting containers.
docker-compose up --build

## Creates a network for the reverse-proxy app.
docker network create reverse-proxy

## Starts the containers in the background, i.e. they keep running when the command prompt is closed.
docker-compose up -d

## Stops and removes the containers defined by the compose file in the current directory.
docker-compose down

## Allows you to interactively work with containers.
docker exec -it <Container Name> /bin/sh

## Will get the ip address of a running container.
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" my-running-site

+Change container hostname (July 12, 2020, 1:10 p.m.)

To change the hostname of a running container, you can use the "nsenter" command.


1- Run "docker container ls" and note the container's entry in the "COMMAND" column.


2- list the namespaces on the host with the "lsns" command:
lsns


3- Find the PID related to the COMMAND you found in step 1.


4- nsenter --target 14145 --uts hostname gitlab.mohsenhassani.ir
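An equivalent sketch that skips lsns (the container name and hostname here are illustrative):

PID=$(docker inspect -f '{{.State.Pid}}' my_container)
nsenter --target "$PID" --uts hostname new-hostname.example.com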

+Dockerfile (July 11, 2020, 5:40 p.m.)

A Dockerfile is a file that contains a list of instructions that Docker should follow when creating our image.


Docker has two stages:
- Build stage
- Run stage

+Pipenv (July 11, 2020, 2:18 p.m.)

pipenv install --system --deploy --ignore-pipfile


Use the --system flag so it installs all packages into the system Python and not into a virtualenv, since Docker containers do not need virtualenvs.

Use the --deploy flag to make the build fail if your Pipfile.lock is out of date.

Use --ignore-pipfile so it installs from Pipfile.lock and won't mess with our setup.
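A minimal Dockerfile sketch using these flags (the base image and paths are illustrative):

FROM python:3.8-slim
WORKDIR /app
COPY Pipfile Pipfile.lock ./
# install pipenv itself, then install the locked dependencies into the system python
RUN pip install pipenv && pipenv install --system --deploy --ignore-pipfile
COPY . .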

+Docker behind SOCKS proxy (Oct. 24, 2018, 2:59 p.m.)

1- mkdir -p /etc/systemd/system/docker.service.d

2- vim /etc/systemd/system/docker.service.d/http-proxy.conf

3-
[Service]
Environment="HTTP_PROXY=socks5://127.0.0.1:1080/"

4- systemctl daemon-reload

5- systemctl restart docker

+Commands (Oct. 24, 2018, 3:22 p.m.)

docker run <image>
This command will download the image, if it is not already present, and runs it as a container.

----------------------------------------------------------------

docker start <name | id>

----------------------------------------------------------------

Get the process ID of the container
docker inspect container | grep Pid

----------------------------------------------------------------

Stop a running container:
docker stop ContainerID

----------------------------------------------------------------

We can see the ports by running:
docker port InstanceID

----------------------------------------------------------------

See the top processes within a container:
docker top ContainerID

----------------------------------------------------------------

docker images

docker images -q
-q tells the Docker command to return image IDs only.

----------------------------------------------------------------

docker inspect <image>
The output will show detailed information on the Image.

----------------------------------------------------------------

docker ps (add -a to include stopped containers)
OR
docker container ls

----------------------------------------------------------------

Statistics of a running container:
docker stats ContainerID
The output will show the CPU and Memory utilization of the Container.

----------------------------------------------------------------

Delete a container:
docker rm ContainerID

----------------------------------------------------------------

Pause the processes in a running container:
docker pause ContainerID
The above command will pause the processes in a running container.

----------------------------------------------------------------

docker unpause ContainerID

----------------------------------------------------------------

Kill the processes in a running container
docker kill ContainerID

--------------------------------------------------------------

Attach to a running container:
docker attach ContainerID

This may hang/freeze or produce no output. Use the following command instead:
docker exec -it <container-id> bash

----------------------------------------------------------------

docker pull gitlab/gitlab-ce

----------------------------------------------------------------

Listing All Docker Networks:
docker network ls

----------------------------------------------------------------

Inspecting a Docker network:
If you want to see more details on the network associated with Docker, you can use the Docker network inspect command.
docker network inspect networkname
Example:
docker network inspect bridge

----------------------------------------------------------------

docker logs -f <name>

----------------------------------------------------------------

--detach --name

----------------------------------------------------------------

See all the commands that were run with an image via a container:
docker history ImageID

----------------------------------------------------------------

Removing Docker Images:
docker rmi ImageID

----------------------------------------------------------------

Set the hostname inside the container:
--hostname gitlab.mohsenhassani.com

----------------------------------------------------------------

docker run -it centos /bin/bash
The -it flags run the container in interactive TTY mode.
/bin/bash is used to run the bash shell once CentOS is up and running.

----------------------------------------------------------------

docker run -p 8080:8080 -p 50000:50000 jenkins

The -p is used to map the port number of the internal Docker image to our main Ubuntu server so that we can access the container accordingly.

----------------------------------------------------------------

Tell Docker to expose the HTTP and SSH ports from GitLab on ports 30080 and 30022, respectively.

--publish 30080:80

--publish 30022:22

----------------------------------------------------------------

See information on the Docker running on the system:

docker info

Return Value

The output will provide the various details of the Docker installed on the system such as:

Number of containers
Number of images
The storage driver used by Docker
The root directory used by Docker
The execution driver used by Docker

----------------------------------------------------------------

Stop all running containers:
docker stop $(docker ps -a -q)

Delete all stopped containers:
docker rm $(docker ps -a -q)

----------------------------------------------------------------

docker volume ls

docker volume rm <volume>

----------------------------------------------------------------

+Docker Compose - Installation (Oct. 24, 2018, 8:31 p.m.)

- You can download the latest version from the following link:
https://github.com/docker/compose/releases

- For Linux, download this file:
docker-compose-Linux-x86_64

For example:
wget -O /usr/bin/docker-compose https://github.com/docker/compose/releases/download/1.27.2/docker-compose-Linux-x86_64

chmod +x /usr/bin/docker-compose

------------------------------------------------------------------------

This will NOT install the latest version. Follow the instructions above to get the latest version.

apt install docker-compose

------------------------------------------------------------------------

Docker Compose is an optional tool that you can use with Docker to make it easier to interact with. It is very useful for development environments: it lets you describe multiple Docker containers and how they should run, forward ports, and set dependencies between your containers.

------------------------------------------------------------------------

+Difference between image and container (Dec. 14, 2018, 1:02 a.m.)

An instance of an image is called a container. When the image is started, you have a running container of this image. You can have many running containers of the same image.


You can see all your images with "docker images" whereas you can see your running containers with "docker ps" (and you can see all containers with docker ps -a).

+Command Examples - docker run (Dec. 14, 2018, 1:36 a.m.)

docker run -v /full/path/to/html/directory:/usr/share/nginx/html:ro -p 8080:80 -d nginx

-v /full/path/to/html/directory:/usr/share/nginx/html:ro
Maps the directory holding our web page to the required location in the image. The ro field instructs Docker to mount it in read-only mode. It’s best to pass Docker the full paths when specifying host directories.

-p 8080:80 maps network service port 80 in the container to 8080 on our host system.

-d detaches the container from our command line session. We don’t want to interact with this container.

----------------------------------------------------------------------

docker run --name foo -d -p 8080:80 mynginx

--name foo gives the container a name, rather than one of the randomly assigned names.

----------------------------------------------------------------------

docker run busybox echo "hello from busybox"

----------------------------------------------------------------------

-P will publish all exposed ports to random ports

We can see the ports by running:
docker port InstanceID

----------------------------------------------------------------------

docker run -d -p 80:80 my_image service nginx start

----------------------------------------------------------------------

docker run -d -p 80:80 my_image nginx -g 'daemon off;'

----------------------------------------------------------------------

Restart policies

--restart=on-failure[:max-retries]
Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.


--restart=always
Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.


--restart=unless-stopped
Always restart the container regardless of the exit status, including on daemon startup, except if the container was put into a stopped state before the Docker daemon was stopped.
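For example (a sketch; the image and container names are illustrative):

docker run -d --restart=unless-stopped --name my_service nginx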

----------------------------------------------------------------------

VOLUME (shared filesystems):

-v, --volume=[host-src:]container-dest[:<options>]: Bind mount a volume.
The comma-delimited `options` are [rw|ro], [z|Z], [[r]shared|[r]slave|[r]private], and [nocopy].

The 'host-src' is an absolute path or a name value.
If neither 'rw' nor 'ro' is specified, the volume is mounted in read-write mode.

The `nocopy` mode is used to disable automatically copying the requested volume path in the container to the volume storage location.
For named volumes, `copy` is the default mode. Copy modes are not supported for bind-mounted volumes.

--volumes-from="": Mount all volumes from the given container(s)

----------------------------------------------------------------------

USER

-u="", --user="": Sets the username or UID used and optionally the groupname or GID for the specified command.

----------------------------------------------------------------------

WORKDIR

The default working directory for running binaries within a container is the root directory (/), but the developer can set a different default with the Dockerfile WORKDIR command. The operator can override this with:

-w="": Working directory inside the container

----------------------------------------------------------------------

docker run \
--rm \
--detach \
--env KEY=VALUE \
--ip 10.10.9.75 \
--publish 3000:3000 \
--name my_container \
--tty --interactive \
--volume my_volume:/my_volume \
--workdir /app \
IMAGE bash

----------------------------------------------------------------------

--rm Automatically remove the container when it exits. The alternative would be to manually stop it and then remove it.

----------------------------------------------------------------------

+Managing Ports (Dec. 14, 2018, 1:24 a.m.)

In Docker, the containers themselves can have applications running on ports. When you run a container, if you want to access the application in the container via a port number, you need to map the port number of the container to the port number of the Docker host.

To understand what ports are exposed by the container, you should use the Docker inspect command to inspect the image:
docker inspect jenkins

The output of the inspect command gives a JSON output. If we observe the output, we can see that there is a section of "ExposedPorts" and see that there are two ports mentioned. One is the data port of 8080 and the other is the control port of 50000.

To run Jenkins and map the ports, you need to change the Docker run command and add the -p option, which specifies the port mapping. So, you need to run the following command:

docker run -p 8080:8080 -p 50000:50000 jenkins

The left-hand side of the port number mapping is the Docker host port to map to and the right-hand side is the Docker container port number.

+Docker Network (Dec. 14, 2018, 2:03 a.m.)

When docker is installed, it creates three networks automatically.
docker network ls

NETWORK ID NAME DRIVER SCOPE
c2c695315b3a bridge bridge local
a875bec5d6fd host host local
ead0e804a67b none null local

--------------------------------------------------------------------

The bridge network is the network in which containers are run by default. So that means when we run a container, it runs in this bridge network. To validate this, let's inspect the network:

docker network inspect bridge

--------------------------------------------------------------------

You can see that our container is listed under the Containers section in the output. What we also see is the IP address this container has been allotted - 172.17.0.2.

--------------------------------------------------------------------

Defining our own networks:

docker network create my-network-net
docker run -d --name es --net my-network-net -p 9200:9200 -p 9300:9300 elasticsearch

--------------------------------------------------------------------

+When to use --hostname in docker? (Dec. 15, 2018, 2:55 a.m.)

The --hostname flag only changes the hostname inside your container. This may be needed if your application expects a specific value for the hostname. It does not change DNS outside of docker, nor does it change the networking isolation, so it will not allow others to connect to the container with that name.

You can use the container name or the container's (short, 12-character) ID to connect from container to container with Docker's embedded DNS, as long as both containers are on the same network and that network is not the default bridge, as sketched below.
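A minimal sketch of name-based connectivity on a user-defined network (the names here are illustrative):

docker network create app-net
docker run -d --name web --net app-net nginx
# a second container on app-net can reach "web" by name via the embedded DNS:
docker run --rm --net app-net busybox ping -c 1 web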

+Installation (Feb. 28, 2017, 10:31 a.m.)

Debian:

1- Install packages to allow apt to use a repository over HTTPS:
apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

2- Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

3- Use the following command to set up the stable repository.
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

4- apt update

5- apt install docker-ce docker-ce-cli containerd.io

6- To make working with Docker easier, add your username to the Docker users group.
sudo usermod -aG docker mohsen

------------------------------------------------------------------

Fedora:

Install Community Edition (CE)

1- Install the dnf-plugins-core package which provides the commands to manage your DNF repositories from the command line.
dnf -y install dnf-plugins-core


2- Use the following command to set up the stable repository. (You might need a proxy)
proxychains4 dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo

3- Install the latest version of Docker CE: (You might need a proxy)
dnf install docker-ce

------------------------------------------------------------------

+Introduction (Feb. 27, 2017, 12:30 p.m.)

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
------------------------------------------------------------
In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they're running on and only requires applications be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.
------------------------------------------------------------
Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.
------------------------------------------------------------
Docker can be integrated into various infrastructure tools, including Amazon Web Services, Ansible, CFEngine, Chef, Google Cloud Platform, IBM Bluemix, HPE Helion Stackato, Jelastic, Jenkins, Kubernetes, Microsoft Azure, OpenStack Nova, OpenSVC, Oracle Container Cloud Service, Puppet, Salt, Vagrant, and VMware vSphere Integrated Containers.

ELK Stack
+beats (May 19, 2019, 9:05 p.m.)

This input plugin enables Logstash to receive events from the Elastic Beats framework.

The following example shows how to configure Logstash to listen on port 5044 for incoming Beats connections and to index into Elasticsearch:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

+Difference between Logstash and Beats (May 19, 2019, 9:01 p.m.)

Beats are lightweight data shippers that you install as agents on your servers to send specific types of operational data to Elasticsearch. Beats have a small footprint and use fewer system resources than Logstash.

Logstash has a larger footprint, but provides a broad array of input, filter, and output plugins for collecting, enriching, and transforming data from a variety of sources.

+Elasticsearch cat APIs (April 22, 2019, 1:24 a.m.)

To check the cluster health, we will be using the _cat API.

cat APIs

JSON is great… for computers. Even if it’s pretty-printed, trying to find relationships in the data is tedious. Human eyes, especially when looking at a terminal, need compact and aligned text. The cat API aims to meet this need.

-------------------------------------------------------------

curl '127.0.0.1:9200/_cat/master?v'

_cat/master?help

-------------------------------------------------------------

List All Indices:
curl '127.0.0.1:9200/_cat/indices?v'

-------------------------------------------------------------

+Installation (April 19, 2019, 10:25 p.m.)

apt install openjdk-8-jdk apt-transport-https curl nginx libpcre3-dev

----------------------------------------------------------------------

Elasticsearch
-----------------

1- wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

2- echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

3- apt update

4- apt install elasticsearch

5- Uncomment the following options from the file "/etc/elasticsearch/elasticsearch.yml"
network.host: localhost
http.port: 9200

6-
systemctl restart elasticsearch
systemctl enable elasticsearch

7- Check the status of the Elasticsearch server: (the server takes some time to start listening)
curl -X GET http://localhost:9200

----------------------------------------------------------------------

Kibana
---------

1- apt install kibana

2- systemctl enable kibana

3-
echo "admin:$(openssl passwd -apr1 my_password)" | sudo tee -a /etc/nginx/htpasswd.kibana

4- vim /etc/nginx/sites-enabled/kibana
server {
    listen 80;
    server_name logs.mhass.ir logs.mohsenhassani.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

5- systemctl restart nginx

----------------------------------------------------------------------

Logstash
-----------

1- apt install logstash


2- Create a logstash filter config file in "/etc/logstash/conf.d/logstash.conf", with this content:
input {
  tcp {
    port => 4300  # optional port number
    codec => json
  }
}

filter { }

output {
  elasticsearch { }
  stdout { }  # or stdout { codec => json } to see the data in the logs for debugging
}


3- Restart logstash services:
systemctl restart logstash
systemctl enable logstash


----------------------------------------------------------------------

For debugging:
tcpdump -nti any port 4300
tail -f /var/log/syslog
tail -f /var/log/logstash/logstash*.log

----------------------------------------------------------------------

+Introduction / Definitions (April 19, 2019, 10:24 p.m.)

Bottom layer: Logstash + Beats

Middle layer: Elasticsearch

Top layer: Kibana

------------------------------------------------------

"ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana.

Elasticsearch is a search and analytics engine.

Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch.

Kibana lets users visualize data with charts and graphs in Elasticsearch.

------------------------------------------------------

Elasticsearch is a distributed, RESTful search and analytics NoSQL engine based on Lucene.

Logstash is a light-weight data processing pipeline for managing events and logs from a wide variety of sources.

Kibana is a web application for visualizing data that works on top of Elasticsearch.

------------------------------------------------------

The Elastic Stack is the next evolution of the ELK Stack.

------------------------------------------------------

English
+Swiss vs Switzerland (July 27, 2020, 1:50 p.m.)

Switzerland is the name in English of the country located in Europe between Germany, Italy, France, and Austria.

Swiss is the adjective to describe things from Switzerland and it is the name of the people who live in Switzerland.

+‘Dot’, ‘period’, ‘full stop’, and ‘point’ (June 24, 2020, 4:36 p.m.)

The little dot which you can find at the end of a sentence is called a period in American English and full stop in British English.


The term dot is used when pronouncing the character in domain names.


The term point refers to the dot glyph used in numbers to separate the fractional part from the integer part (unlike many other languages, English uses a decimal point, not a decimal comma).

+Technical (March 26, 2020, 8:06 p.m.)

I just started building a medium-sized app.

open-source products

deploy

executable binaries

executable path

object-oriented

type-safe

Kotlin is 100% interoperable with Java.

Kotlin brings in nice features out-of-box over Java.

high availability

data formats

XML is actually based on the Standard Generalized Markup Language (SGML), which has been used by the publishing industry for decades.

"boilerplate code" is code that almost every program needs

Widgets are really user interface components. They're not just about the visuals, they all can contain logic.

You would define what should happen when a button is tapped.

There is simply a TCP socket waiting in accept mode on a cloud Google server.

Change is the inevitable fate of code.

+Vocabulary (March 20, 2020, 7 p.m.)

Realtor: a real estate agent

nurture: care for and encourage the growth or development of

sniffle: sniff slightly or repeatedly, typically because of a cold or fit of crying.

fungus = fungi: mushrooms

spare key

holster: cover, case, a holder for carrying a handgun

contradict: deny the truth of (a statement)

security footage: a length of film made for movies or television

FlexBox
+Make body shrinkable and extensible (Sept. 18, 2019, 1:05 a.m.)

.holy-grail, .holy-grail-body {
  display: flex;
  flex: 1 1 auto;
  flex-direction: column;
}

Flutter
+Parsing complex JSON in Flutter (March 29, 2020, 6:53 p.m.)

https://medium.com/flutter-community/parsing-complex-json-in-flutter-747c46655f51

+BloC Description (March 26, 2020, 8:27 p.m.)

Think of a stream as a pipe filled with water that flows from the A-side to the B-side.

Let’s say you are in side-A and want to send some colorful tiny children’s balls to side-B. You sink these balls one after the other inside the pipe and the water will transport them to side-B one by one in a stream fashion.

The balls exit the pipe from side-B. Let’s say they fall and make a noise. Let’s say there is another person on side-B waiting for the balls. Because this person doesn’t know when exactly a ball arrives, he decided to read a newspaper. It’s only when he hears the sound of a ball that he is aware of the arrival of a ball. At that time, he can catch the ball and make use of it.


In real BloC terms:
- The pipe is the StreamController
- The flow of water is StreamController.stream
- The action of pushing balls in from side-A is StreamController.sink
- The colorful children’s balls are data of any type
- The person on side-B listening for the falling balls is StreamController.stream.listen

-------------------------------------------------------------------------

For each of your variables you need to define five things:

1- Your variable name
2- StreamController
3- Stream
4- Sink
5- Close StreamController

-------------------------------------------------------------------------

import 'dart:async';

class YourBloc {
  var yourVar;

  // the StreamController is the "pipe"
  final yourVarController = StreamController<yourType>();

  // the stream side, where listeners receive new values
  Stream<yourType> get yourVarStream => yourVarController.stream;

  // the sink side, where new values are pushed in
  StreamSink<yourType> get yourVarSink => yourVarController.sink;

  yourMethod() {
    // some logic stuff;
    yourVar = yourNewValue;
    yourVarSink.add(yourVar);
  }

  // close the StreamController to avoid leaks
  dispose() {
    yourVarController.close();
  }
}

-------------------------------------------------------------------------

+setState method description (March 26, 2020, 8:13 p.m.)

Flutter is declarative. This means that Flutter rebuilds its user interface (UI) from scratch to reflect the current state of your app each time setState() method is called.

+BLoC (March 26, 2020, 2:28 p.m.)

BLoC stands for Business Logic Component. It was created by Google and introduced at Google I/O 2018. It is built on Streams and Reactive Programming.

These classes act as a layer between the data and the UI components. The BLoC listens to events passed to it and, after receiving a response, emits an appropriate state.

-------------------------------------------------------------------------

StreamController:

Allows sending data, error and done events on its stream. This class can be used to create a simple stream that others can listen on, and to push events to that stream.

-------------------------------------------------------------------------

The BloC pattern is often used with the third-party library RxDart because it has many features not available in the standard Dart StreamController.

-------------------------------------------------------------------------

+Useful Links (March 21, 2020, 11 p.m.)

Hiding the Bottom Navigation Bar on Scroll:

https://medium.com/flutter/getting-to-the-bottom-of-navigation-in-flutter-b3e440b9386

--------------------------------------------------------------

https://medium.com/@theboringdeveloper/common-bottom-navigation-bar-flutter-e3693305d2d

--------------------------------------------------------------

Git
+Sync a branch with master (Sept. 21, 2020, 2:49 p.m.)

git checkout develop

git pull origin master --rebase

+Status (Sept. 20, 2020, 5:26 p.m.)

git status

git status -s

+Get remote URL (Sept. 17, 2020, 10:14 p.m.)

To get only the remote URL:

git config --get remote.origin.url


Then change the URL from:
git@gitlab.mohsenhassani.com:Mohsen/tiptong.git
To
https://gitlab.mohsenhassani.com/mohsen/tiptong.git
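One way to apply such a change is with git remote set-url:

git remote set-url origin https://gitlab.mohsenhassani.com/mohsen/tiptong.git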

---------------------------------------------------------------

In order to get more details about a particular remote, use:

git remote show origin

---------------------------------------------------------------

+Log (May 6, 2020, 1:21 p.m.)

git log


git log --pretty=oneline


git log --pretty=oneline --abbrev-commit


git log --pretty=oneline --author="Mohsen"

+See changes of a commit (April 27, 2020, 10:48 a.m.)

git show <COMMIT>

---------------------------------------------------------------------------------------

Shows the changes made in the most recent commit:

git show

---------------------------------------------------------------------------------------

+View logs of a user's commits (April 27, 2020, 10:47 a.m.)

git log --author="Mohsen"

+See changes before pulling (March 15, 2020, 4:39 p.m.)

1- Fetch the changes from the remote:
git fetch origin


2- Show commit logs of the changes:
git log develop..origin/develop


3- Show diffs of changes:
git diff develop..origin/develop


4- Apply the changes by merge:
git merge origin/develop
Or just pull the changes:
git pull

+Clone a specific branch (Feb. 20, 2020, 10:25 p.m.)

git clone -b <branch> <remote_repo>

+Server certificate verification failed. CAfile (Feb. 20, 2020, 7:16 p.m.)

git config --global http.sslverify "false"

+git clean (Feb. 19, 2020, 2:23 p.m.)

Remove files from your working directory that are not tracked.

If you change your mind, there is often no retrieving the content of those files.

-----------------------------------------------------------------

A safer option is to run git stash --all to remove everything but save it in a stash.

-----------------------------------------------------------------

git clean


--dry-run
git clean -d -n


interactive mode
git clean -x -i

-----------------------------------------------------------------

+.git/info/exclude vs .gitignore (Aug. 8, 2018, 11:36 a.m.)

.gitignore is applied to every clone of the repo (it comes along as a versioned file), while .git/info/exclude only applies to your local copy of the repository.

-----------------------------------------------------------------------

The advantage of .gitignore is that it can be checked into the repository itself, unlike .git/info/exclude. Another advantage is that you can have multiple .gitignore files, one inside each directory/subdirectory for directory-specific ignore rules, unlike .git/info/exclude.

So, .gitignore is available across all clones of the repository. Therefore, in large teams, all people are ignoring the same kind of files Example *.db, *.log. And you can have more specific ignore rules because of multiple .gitignore.

.git/info/exclude is available for individual clones only, hence what one person ignores in his clone is not available in some other person's clone. For example, if someone uses Eclipse for development it may make sense for that developer to add a .build folder to .git/info/exclude because other devs may not be using Eclipse.

In general, files/ignore rules that have to be universally ignored should go in .gitignore, and otherwise files that you want to ignore only on your local clone should go into .git/info/exclude

+Change Remote Origin (Oct. 24, 2018, 2:26 p.m.)

git remote rm origin
git remote add origin git@github.com:username/repositoryName.git
git config branch.master.remote origin
git config branch.master.merge refs/heads/master

+Force Push (Oct. 14, 2018, 2:13 p.m.)

git push https://git.... --force

git push --force origin .....

git push https://git.... -f

git push -f origin .....

+Cancel a local git commit (Feb. 25, 2019, 4:04 p.m.)

Unstage all changes that have been added to the staging area:
To undo the most recent add, but not committed, files/folders:

git reset .

---------------------------------------------------------

Undo most recent commit:
git reset HEAD~1

---------------------------------------------------------

+Delete from reflog (Feb. 25, 2019, 4:04 p.m.)

git reflog delete HEAD@{3}

+Revert changes (Feb. 25, 2019, 4:03 p.m.)

Reverting a single file

If the file isn’t committed:

git checkout filename



If the file is already committed:
# filename is the path to your file, abcde is the hash of the commit you want to switch to.

git checkout abcde filename

or

git reset abcde filename

------------------------------------------------------------------

Unstaged local changes (before you commit)

Discard all local changes, but save them for possible re-use later:
git stash

Discarding local changes (permanently) to a file:
git checkout -- <file>

Discard all local changes to all files permanently:
git reset --hard

------------------------------------------------------------------

+Comparing two branches (Feb. 25, 2019, 1:20 p.m.)

git diff branch_1 branch_2

+Rename a branch (Feb. 25, 2019, 4:01 p.m.)

1- Rename the local branch name:

If you are on the branch:
git branch -m <newname>

If you are on a different branch:
git branch -m <oldname> <newname>


2- Delete the old name remote branch and push the new name local branch:
git push origin :old-name new-name


3- Reset the upstream branch for the new-name local branch:

Switch to the branch and then:
git push origin -u new-name

+Delete a branch (Feb. 28, 2019, 9:17 a.m.)

Delete a Local GIT branch:

Use either of the following commands:
git branch -d branch_name
git branch -D branch_name


The -d option stands for --delete, which would delete the local branch, only if you have already pushed and merged it with your remote branches.

The -D option stands for --delete --force, which deletes the branch regardless of its push and merge status, so be careful using this one!

------------------------------------------------------

Delete a remote GIT branch:

Use either of the following commands:
git push <remote_name> --delete <branch_name>
git push <remote_name> :<branch_name>

------------------------------------------------------

Push to remote branch and delete:

If you ever want to push your local branch to remote and delete your local, you can use git push with the -d option as an alias for --delete.

------------------------------------------------------

+Fetch vs Pull (March 2, 2019, 10:03 a.m.)

In the simplest terms, git pull does a git fetch followed by a git merge.

---------------------------------------------------

git fetch only downloads new data from a remote repository - but it doesn't integrate any of this new data into your working files. Fetch is great for getting a fresh view of all the things that happened in a remote repository.

---------------------------------------------------

git pull, in contrast, is used with a different goal in mind: to update your current HEAD branch with the latest changes from the remote server. This means that pull not only downloads new data; it also directly integrates it into your current working copy files. This has a couple of consequences:

Since "git pull" tries to merge remote changes with your local ones, a so-called "merge conflict" can occur.

Like for many other actions, it's highly recommended to start a "git pull" only with a clean working copy. This means that you should not have any uncommitted local changes before you pull. Use Git's Stash feature to save your local changes temporarily.
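A minimal sketch of the two workflows:

# fetch: download remote data only; inspect before integrating
git fetch origin
git log HEAD..origin/master

# pull: fetch + merge into the current branch in one step
git pull origin master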

+Merge (March 4, 2019, 12:29 p.m.)

Switch to the production branch and:
git merge other_branch

+Untracking/Re-indexing files based on .gitignore (March 4, 2019, 1:05 a.m.)

git add .

git commit -m "Some Message"

git push origin master

git rm -r --cached .

git add .

git commit -m "Reindexing..."

+Stash (March 4, 2019, 3:57 p.m.)

git stash

git stash pop

git stash list

git stash apply

git stash show stash@{0}

git stash apply --index

------------------------------------------------------------------------

git stash --patch

Git will not stash everything that is modified but will instead prompt you interactively which of the changes you would like to stash and which you would like to keep in your working directory.

------------------------------------------------------------------------

Creating a Branch from a Stash

git stash branch <new branchname>

------------------------------------------------------------------------

+Submodule (Nov. 29, 2017, 6:17 p.m.)

1- CD to the path you need the module get cloned.

2- git submodule add https://github.com/ceph/ceph-ansible.git

-----------------------------------------------------------

If this error is raised: "blah blah already exists in the index", run:
git rm --cached blah blah
and you should also delete the files from this path:
rm -rf .git/modules/...

-----------------------------------------------------------

To remove a submodule you need to:

Delete the relevant section from the .gitmodules file.
Stage the .gitmodules changes git add .gitmodules
Delete the relevant section from .git/config.
Run git rm --cached path_to_submodule (no trailing slash).
Run rm -rf .git/modules/path_to_submodule
Commit git commit -m "Removed submodule <name>"
Delete the now untracked submodule files
rm -rf path_to_submodule
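Newer git versions can automate part of these steps with git submodule deinit (a sketch, using the same placeholder path as above):

git submodule deinit -f path_to_submodule
git rm -f path_to_submodule
rm -rf .git/modules/path_to_submodule
git commit -m "Removed submodule <name>"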

-----------------------------------------------------------

+Commands (July 29, 2017, 11:26 a.m.)

git pull

-------------------------------------------------

git fetch

-------------------------------------------------

git pull origin master

-------------------------------------------------

Create a branch:
git checkout -b branch_name

-------------------------------------------------

Work on an existing branch:
git checkout branch_name

-------------------------------------------------

View the changes you've made:
git status

-------------------------------------------------

View differences:
git diff

-------------------------------------------------

Delete all changes in the Git repository:
To delete all local changes in the repository that have not been added to the staging area, and leave unstaged files/folders, type:

git checkout .

-------------------------------------------------

Delete all untracked changes in the Git repository:
git clean -f

-------------------------------------------------

Unstage all changes that have been added to the staging area:
To undo the most recent add, but not committed, files/folders:

git reset .

-------------------------------------------------

Undo most recent commit:
git reset HEAD~1

-------------------------------------------------

Merge created branch with master branch:

You need to be in the created branch.

git checkout NAME-OF-BRANCH
git merge master

-------------------------------------------------

Merge master branch with created branch:

You need to be in the master branch.

git checkout master
git merge NAME-OF-BRANCH

-------------------------------------------------

+Diff (July 29, 2017, 11:17 a.m.)

If you want to see what you haven't git added yet:
git diff myfile.txt

or if you want to see already-added changes
git diff --cached myfile.txt

+Modify existing / unpushed commits (Jan. 28, 2017, 3:12 p.m.)

git commit --amend -m "New commit message"

+Delete file from repository (Jan. 28, 2017, 3:04 p.m.)

If you deleted a file from the working tree, then commit the deletion:
git add . -A
git commit -m "Deleted some files..."
git push origin master

----------------------------------------------------------------------

Remove a file from a Git repository without deleting it from the local filesystem:
git rm --cached <filename>
git rm --cached -r <dir_name>
git commit -m "Removed folder from repository"
git push origin master

+.gitignore Rules (Jan. 28, 2017, 2:56 p.m.)

A blank line matches no files, so it can serve as a separator for readability.

A line starting with # serves as a comment.

An optional prefix ! which negates the pattern; any matching file excluded by a previous pattern will become included again. If a negated pattern matches, this will override lower precedence patterns sources.

If the pattern ends with a slash, it is removed for the purpose of the following description, but it would only find a match with a directory. In other words, foo/ will match a directory foo and paths underneath it, but will not match a regular file or a symbolic link foo (this is consistent with the way how path spec works in general in git).

If the pattern does not contain a slash /, git treats it as a shell glob pattern and checks for a match against the pathname relative to the location of the .gitignore file (relative to the top level of the work tree if not from a .gitignore file).

Otherwise, git treats the pattern as a shell glob suitable for consumption by fnmatch(3) with the FNM_PATHNAME flag: wildcards in the pattern will not match a / in the pathname. For example, Documentation/*.html matches Documentation/git.html but not Documentation/ppc/ppc.html or tools/perf/Documentation/perf.html.

A leading slash matches the beginning of the pathname. For example, /*.c matches cat-file.c but not mozilla-sha1/sha1.c.
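A small illustrative .gitignore combining these rules:

# ignore any .log file anywhere (shell glob)
*.log
# but re-include keep.log (negation)
!keep.log
# leading slash: match only at the repository root
/build
# trailing slash: match only directories named "tmp"
tmp/
# a pattern containing a slash: wildcards do not cross directory boundaries
Documentation/*.html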

+Examples (Aug. 21, 2014, 1:29 p.m.)

mkdir my_project
cd my_project
git init
git remote add origin https://MohsenHassani@bitbucket.org/MohsenHassani/my_project.git
git add .
git commit -m 'initial commit'
git push origin master

-----------------------------------------------------------------------------------------

After each change in project:
git add .
git commit -m '<the comment>'
git push origin master

-----------------------------------------------------------------------------------------

git config http.postBuffer 1048576000
git config --global user.name "Mohsen Hassani"
git config --global user.email "mohsen@mohsenhassani.com"
git config --global color.ui true
git config --global color.status auto
git config --global color.branch auto
git config --list
git log

git add -A .
git commit -m "File nonsense.txt is now removed"

git commit -m "message with a tpyo here"
git commit --amend -m "More changes - now correct"

git remote
git remote -v

export http_proxy=http://proxy:8080

# Set proxy for git globally
git config --global http.proxy http://proxy:8080
# To check the proxy settings
git config --get http.proxy
# Just in case you need to, you can also revoke the proxy settings
git config --global --unset http.proxy

Gitlab
+Gitlab CE with Docker Compose (Sept. 20, 2020, 11:07 a.m.)

version: "3.8"
services:
gitlab:
image: 'gitlab/gitlab-ce:latest'
restart: always
hostname: 'gitlab.arisyar.local'
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'https://gitlab.arisyar.local'
ports:
- '80:80'
- '443:443'
- '4022:22'
volumes:
- '/srv/gitlab/config:/etc/gitlab'
- '/srv/gitlab/logs:/var/log/gitlab'
- '/srv/gitlab/data:/var/opt/gitlab'

+Gitlab Flow (Oct. 8, 2018, 3:08 p.m.)

In git you add files from the working copy to the staging area. After that you commit them to the local repo. The third step is pushing to a shared remote repository. After getting used to these three steps the branching model becomes the challenge.


Since many organizations new to git have no conventions how to work with it, it can quickly become a mess. The biggest problem they run into is that many long running branches that each contain part of the changes are around. People have a hard time figuring out which branch they should develop on or deploy to production. Frequently the reaction to this problem is to adopt a standardized pattern such as git flow and GitHub flow. We think there is still room for improvement and will detail a set of practices we call GitLab flow.


Git flow and its problems:
Git flow was one of the first proposals to use git branches and it has gotten a lot of attention. It advocates a master branch and a separate develop branch, as well as supporting branches for features, releases and hotfixes. The development happens on the develop branch, moves to a release branch and is finally merged into the master branch. Git flow is a well-defined standard, but its complexity introduces two problems.

The first problem is that developers must use the develop branch and not master; master is reserved for code that is released to production. It is a convention to call your default branch master and to mostly branch from and merge to this. Since most tools automatically make the master branch the default one and display it by default, it is annoying to have to switch to another one.

The second problem of git flow is the complexity introduced by the hotfix and release branches. These branches can be a good idea for some organizations but are overkill for the vast majority of them. Nowadays most organizations practice continuous delivery, which means that your default branch can be deployed. This means that hotfix and release branches can be avoided, including all the ceremony they introduce. An example of this ceremony is the merging back of release branches. Though specialized tools do exist to solve this, they require documentation and add complexity. Frequently developers make a mistake and, for example, changes are only merged into master and not into the develop branch. The root cause of these errors is that git flow is too complex for most of the use cases. And doing releases doesn't automatically mean also doing hotfixes.


GitHub flow as a simpler alternative:
In reaction to git flow a simpler alternative was detailed, GitHub flow. This flow has only feature branches and a master branch. This is very simple and clean, many organizations have adopted it with great success. Atlassian recommends a similar strategy although they rebase feature branches. Merging everything into the master branch and deploying often means you minimize the amount of code in 'inventory' which is in line with the lean and continuous delivery best practices. But this flow still leaves a lot of questions unanswered regarding deployments, environments, releases and integrations with issues. With GitLab flow we offer additional guidance for these questions.


Production branch with GitLab flow:
GitHub flow does assume you are able to deploy to production every time you merge a feature branch. This is possible for e.g. SaaS applications, but there are many cases where this is not possible. One would be a situation where you are not in control of the exact release moment, for example an iOS application that needs to pass App Store validation. Another example is when you have deployment windows (workdays from 10am to 4pm when the operations team is at full capacity) but you also merge code at other times. In these cases you can make a production branch that reflects the deployed code. You can deploy a new version by merging in master to the production branch. If you need to know what code is in production you can just checkout the production branch to see. The approximate time of deployment is easily visible as the merge commit in the version control system. This time is pretty accurate if you automatically deploy your production branch. If you need a more exact time you can have your deployment script create a tag on each deployment. This flow prevents the overhead of releasing, tagging and merging that is common to git flow.


Environment branches with GitLab flow:
It might be a good idea to have an environment that is automatically updated to the master branch. Only in this case, the name of this environment might differ from the branch name. Suppose you have a staging environment, a pre-production environment and a production environment. In this case the master branch is deployed on staging. When someone wants to deploy to pre-production they create a merge request from the master branch to the pre-production branch. And going live with code happens by merging the pre-production branch into the production branch. This workflow where commits only flow downstream ensures that everything has been tested on all environments. If you need to cherry-pick a commit with a hotfix it is common to develop it on a feature branch and merge it into master with a merge request, do not delete the feature branch. If master is good to go (it should be if you are practicing continuous delivery) you then merge it to the other branches. If this is not possible because more manual testing is required you can send merge requests from the feature branch to the downstream branches.


Release branches with GitLab flow:
Only in case you need to release software to the outside world you need to work with release branches. In this case, each branch contains a minor version (2-3-stable, 2-4-stable, etc.). The stable branch uses master as a starting point and is created as late as possible. By branching as late as possible you minimize the time you have to apply bug fixes to multiple branches. After a release branch is announced, only serious bug fixes are included in the release branch. If possible these bug fixes are first merged into master and then cherry-picked into the release branch. This way you can't forget to cherry-pick them into master and encounter the same bug on subsequent releases. This is called an 'upstream first' policy that is also practiced by Google and Red Hat. Every time a bug-fix is included in a release branch the patch version is raised (to comply with Semantic Versioning) by setting a new tag. Some projects also have a stable branch that points to the same commit as the latest released branch. In this flow it is not common to have a production branch (or git flow master branch).


Merge/pull requests with GitLab flow:
Merge or pull requests are created in a git management application and ask an assigned person to merge two branches. Tools such as GitHub and Bitbucket choose the name pull request since the first manual action would be to pull the feature branch. Tools such as GitLab and others choose the name merge request since that is the final action that is requested of the assignee. In this article we'll refer to them as merge requests.

If you work on a feature branch for more than a few hours it is good to share the intermediate result with the rest of the team. This can be done by creating a merge request without assigning it to anyone, instead you mention people in the description or a comment (/cc @mark @susan). This means it is not ready to be merged but feedback is welcome. Your team members can comment on the merge request in general or on specific lines with line comments. The merge requests serves as a code review tool and no separate tools such as Gerrit and reviewboard should be needed. If the review reveals shortcomings anyone can commit and push a fix. Commonly the person to do this is the creator of the merge/pull request. The diff in the merge/pull requests automatically updates when new commits are pushed on the branch.

When you feel comfortable with it to be merged you assign it to the person that knows most about the codebase you are changing and mention any other people you would like feedback from. There is room for more feedback and after the assigned person feels comfortable with the result the branch is merged. If the assigned person does not feel comfortable they can close the merge request without merging.

In GitLab it is common to protect the long-lived branches (e.g. the master branch) so that normal developers can't modify these protected branches. So if you want to merge it into a protected branch you assign it to someone with maintainer authorizations.


Issue tracking with GitLab flow:
GitLab flow is a way to make the relation between the code and the issue tracker more transparent.

Any significant change to the code should start with an issue where the goal is described. Having a reason for every code change is important to inform everyone on the team and to help people keep the scope of a feature branch small. In GitLab each change to the codebase starts with an issue in the issue tracking system. If there is no issue yet it should be created first, provided there is significant work involved (more than 1 hour). For many organizations this will be natural since the issue will have to be estimated for the sprint. Issue titles should describe the desired state of the system, e.g. "As an administrator I want to remove users without receiving an error" instead of "Admin can't remove users".

When you are ready to code you start a branch for the issue from the master branch. The name of this branch should start with the issue number, for example '15-require-a-password-to-change-it'.
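
For example:

git checkout -b 15-require-a-password-to-change-it master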

When you are done or want to discuss the code you open a merge request. This is an online place to discuss the change and review the code. Opening a merge request is a manual action since you do not always want to merge a new branch you push, it could be a long-running environment or release branch. If you open the merge request but do not assign it to anyone it is a 'Work In Progress' merge request. These are used to discuss the proposed implementation but are not ready for inclusion in the master branch yet. Pro tip: Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it's ready.

When the author thinks the code is ready the merge request is assigned to a reviewer. The reviewer presses the merge button when they think the code is ready for inclusion in the master branch. In this case the code is merged and a merge commit is generated that makes this event easily visible later on. Merge requests always create a merge commit, even when the commit could be added without one. This merge strategy is called 'no fast-forward' in git. After the merge the feature branch is deleted since it is no longer needed; in GitLab this deletion is an option when merging.
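
Pressing the merge button is roughly equivalent to the following sketch (branch name from the example above):

git checkout master
git merge --no-ff 15-require-a-password-to-change-it
git branch -d 15-require-a-password-to-change-it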

Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case it is no problem to reuse the same branch name since it was deleted when the branch was merged. At any time there is at most one branch for every issue. It is possible that one feature branch solves more than one issue.

+Uninstall (Oct. 23, 2018, 4:25 p.m.)

1- sudo gitlab-ctl uninstall

2- sudo gitlab-ctl cleanse

3- sudo gitlab-ctl remove-accounts

4- sudo dpkg -P gitlab-ce

5- Delete these directories:
rm -r /opt/gitlab/
rm -r /var/opt/gitlab
rm -r /etc/gitlab
rm -r /var/log/gitlab

+Docker (Dec. 15, 2018, 4:04 p.m.)

docker pull gitlab/gitlab-ce:latest

-----------------------------------------------------------

docker run -d --hostname git.mohsenhassani.com \
  -p 30443:443 -p 3080:80 -p 3022:22 \
  --name gitlab --restart always \
  -v /var/docker_data/gitlab/config:/etc/gitlab \
  -v /var/docker_data/gitlab/logs:/var/log/gitlab \
  -v /var/docker_data/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest

-----------------------------------------------------------

+Markdown Cheatsheet (March 10, 2018, 8:14 p.m.)

https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet

+Runner - .gitlab-ci.yml sample (Feb. 14, 2018, 11:38 a.m.)

update_docs:
  script:
    - mkdocs build
    - ssh-keyscan -H mohsenhassani.com >> ~/.ssh/known_hosts
    - scp -rC site/* root@mohsenhassani.com:/var/www/html/
    - ssh root@mohsenhassani.com "/etc/init.d/nginx restart"

+Send Notifications to Email (April 12, 2017, 3:03 p.m.)

https://docs.gitlab.com/omnibus/settings/smtp.html
https://docs.gitlab.com/ce/administration/troubleshooting/debug.html
-----------------------------------------------------------------
To test the mail server:
1- sudo gitlab-rails console production
-----------------------------------------------------------------
2- Look at the ActionMailer delivery_method:
ActionMailer::Base.delivery_method
-----------------------------------------------------------------
3- Check the mail settings:

If it's configured with smtp:
ActionMailer::Base.smtp_settings

If it is sendmail:
ActionMailer::Base.sendmail_settings

You may need to check your local mail logs (e.g. /var/log/mail.log) for more details.
-----------------------------------------------------------------
4- Send a test message via the console.
Notify.test_email('mohsen@mohsenhassani.com', 'Hello World', 'This is a test message').deliver_now

In case the email is not sent (after checking your mail), you can see the reason/error in:
tail -f /var/log/mail.log
-----------------------------------------------------------------
5- If you needed to change any configs, refer to this file:

vim /var/opt/gitlab/gitlab-rails/etc/gitlab.yml

OR depending on your gitlab version, maybe this one:

/etc/gitlab/gitlab.rb

And after any change to it:
gitlab-ctl reconfigure
-----------------------------------------------------------------
For fixing some problems I had to replace the default "postfix" with "sendmail":
apt install sendmail (this will remove postfix and install sendmail)

In /etc/hosts I had to put the required domain names to fix the error "Sender address rejected: Domain not found".
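
For example, an entry of this shape (the IP and domain are placeholders for your own values):

127.0.0.1    localhost yourdomain.com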
-----------------------------------------------------------------

+Deleting a runner (March 8, 2017, 7:38 p.m.)

gitlab-runner unregister --name runner-0

For removing all runners that can no longer be verified with the GitLab server:
gitlab-runner verify --delete

+Install Gitlab Runner (Feb. 25, 2017, 3:09 p.m.)

GitLab Runner is an application that processes builds. It can be deployed separately and work with GitLab CI through an API.
In order to run tests, you need at least one GitLab instance and one GitLab Runner.

-----------------------------------------------------------

Runners:

In GitLab CI, Runners run your YAML. A Runner is an isolated (virtual) machine that picks up jobs through the coordinator API of GitLab CI. A Runner can be specific to a certain project or serve any project in GitLab CI. A Runner that serves all projects is called a shared Runner.
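
On a machine where the runner is installed, you can list the registered Runners with:

gitlab-runner list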

-----------------------------------------------------------

For installing on Docker, refer to the section below.

1- Add GitLab's official repository:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash

2- Install the gitlab-runner package (formerly gitlab-ci-multi-runner):
apt install gitlab-runner

3- Register the Runner:
gitlab-runner register

-----------------------------------------------------------

Install inside a Docker container:

1- Use Docker volumes to start the Runner container:
docker volume create gitlab-runner-config

2- Start the Runner container using the volume we just created:
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v gitlab-runner-config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest

-----------------------------------------------------------

Register the Runner:

docker run --rm -it -v gitlab-runner-config:/etc/gitlab-runner gitlab/gitlab-runner:latest register


Registering a runner asks a few configuration questions. Answer them based on your GitLab Runners page (it shows the URL and registration token).

Please enter the executor:
docker

Please enter the Docker image:
alpine:latest
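
The same registration can also be done non-interactively; a sketch with a placeholder URL and token:

docker run --rm -it -v gitlab-runner-config:/etc/gitlab-runner gitlab/gitlab-runner:latest register \
  --non-interactive \
  --url https://your-gitlab.example.com/ \
  --registration-token YOUR_REGISTRATION_TOKEN \
  --executor docker \
  --docker-image alpine:latest \
  --description docker-runner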

-----------------------------------------------------------

+Install GitLab on server (Feb. 25, 2017, 12:16 p.m.)

https://about.gitlab.com/installation/
https://about.gitlab.com/downloads/
-----------------------------------------------------------
1- Install and configure the necessary dependencies:
sudo apt-get install curl openssh-server ca-certificates postfix

2- Add the GitLab package server and install the package:
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo apt-get install gitlab-ce

3- Configure and start GitLab:
sudo gitlab-ctl reconfigure

4- Browse to the hostname and login:
On your first visit, you'll be redirected to a password reset screen to provide the password for the initial administrator account. Enter your desired password and you'll be redirected back to the login screen.
The default account's username is "root". Provide the password you created earlier and login. After login you can change the username if you wish.

+Install GitLab CI (Feb. 25, 2017, 11:46 a.m.)

GitLab CI is a part of GitLab, a web application with an API that stores its state in a database. It manages projects/builds and provides a nice user interface, besides all the features of GitLab.

https://github.com/gitlabhq/gitlab-ci/blob/5-2-stable/doc/install/installation.md
----------------------------------------------------------------
Starting from version 8.0, GitLab Continuous Integration (CI) is fully integrated into GitLab itself and is enabled by default on all projects.
----------------------------------------------------------------
GitLab offers a continuous integration service. If you add a .gitlab-ci.yml file to the root directory of your repository and configure your GitLab project to use a Runner, then each merge request or push triggers your CI pipeline.
----------------------------------------------------------------

Gradle
HTML
+iframe (June 3, 2018, 12:11 p.m.)

<!DOCTYPE html>
<html>
<head>
    <title>Mohsen Hassani</title>
    <style>
        body, html {
            margin: 0;
            overflow: hidden;
        }

        iframe {
            width: 100%;
            height: 95vh;
            border: 0;
        }
    </style>
</head>
<body>
    <div class="iframe-link">
        <iframe src="http://www.mohsenhassani.com">
            Please switch to another modern browser.
        </iframe>
    </div>
</body>
</html>

+Favicon (Feb. 20, 2019, 11:20 a.m.)

<link rel="shortcut icon" type="image/png" href="favicon.ico" />
<link rel="apple-touch-icon" href="/custom_icon.png" />

+Conditions If (July 27, 2015, 3:02 p.m.)

You might need to replace the conditional comment syntaxes below with this (downlevel-revealed) syntax:
<![if gte IE 9]>
<![endif]>
************************************************************
Target ALL VERSIONS of IE

<!--[if IE]>
<link rel="stylesheet" type="text/css" href="all-ie-only.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target everything EXCEPT IE

<!--[if !IE]><!-->
<link rel="stylesheet" type="text/css" href="not-ie.css" />
<!--<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 7 ONLY

<!--[if IE 7]>
<link rel="stylesheet" type="text/css" href="ie7.css">
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 6 ONLY

<!--[if IE 6]>
<link rel="stylesheet" type="text/css" href="ie6.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 5 ONLY

<!--[if IE 5]>
<link rel="stylesheet" type="text/css" href="ie5.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 5.5 ONLY

<!--[if IE 5.5000]>
<link rel="stylesheet" type="text/css" href="ie55.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 6 and LOWER

<!--[if lt IE 7]>
<link rel="stylesheet" type="text/css" href="ie6-and-down.css" />
<![endif]-->

<!--[if lte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6-and-down.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 7 and LOWER

<!--[if lt IE 8]>
<link rel="stylesheet" type="text/css" href="ie7-and-down.css" />
<![endif]-->

<!--[if lte IE 7]>
<link rel="stylesheet" type="text/css" href="ie7-and-down.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 8 and LOWER

<!--[if lt IE 9]>
<link rel="stylesheet" type="text/css" href="ie8-and-down.css" />
<![endif]-->

<!--[if lte IE 8]>
<link rel="stylesheet" type="text/css" href="ie8-and-down.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 6 and HIGHER

<!--[if gt IE 5.5]>
<link rel="stylesheet" type="text/css" href="ie6-and-up.css" />
<![endif]-->

<!--[if gte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6-and-up.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 7 and HIGHER

<!--[if gt IE 6]>
<link rel="stylesheet" type="text/css" href="ie7-and-up.css" />
<![endif]-->

<!--[if gte IE 7]>
<link rel="stylesheet" type="text/css" href="ie7-and-up.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 8 and HIGHER

<!--[if gt IE 7]>
<link rel="stylesheet" type="text/css" href="ie8-and-up.css" />
<![endif]-->

<!--[if gte IE 8]>
<link rel="stylesheet" type="text/css" href="ie8-and-up.css" />
<![endif]-->

InfluxDB
+Queries (Dec. 12, 2018, 12:08 p.m.)

# influx

> show databases;

> show measurements
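
A few more commands for a quick smoke test (database and measurement names are examples):

> CREATE DATABASE mydb
> USE mydb
> INSERT cpu,host=server01 value=0.64
> SELECT * FROM cpu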

+Configuration (Dec. 9, 2018, 3:26 p.m.)

https://docs.influxdata.com/influxdb/v1.7/introduction/installation/#configuring-influxdb-oss

By default, InfluxDB uses the following network ports:

TCP port 8086 is used for client-server communication over InfluxDB’s HTTP API
TCP port 8088 is used for the RPC service for backup and restore

In addition to the ports above, InfluxDB also offers multiple plugins that may require custom ports. All port mappings can be modified through the configuration file, which is located at /etc/influxdb/influxdb.conf for default installations.

---------------------------------------------------------

The system has internal defaults for every configuration file setting. View the default configuration settings with the "influxd config" command.
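
For example, to generate a config file populated with those defaults:

influxd config > /etc/influxdb/influxdb.conf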

--------------------------------------------------------

+Installation (Dec. 9, 2018, 3:24 p.m.)

https://docs.influxdata.com/influxdb/v1.7/introduction/installation/

The Ubuntu and Debian installations differ slightly. (Refer to the link above.)

1- curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -

2- source /etc/lsb-release

3- echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

4- apt-get update && sudo apt-get install influxdb

5- service influxdb start

---------------------------------------------------------

+Introduction (Dec. 9, 2018, 3:12 p.m.)

https://docs.influxdata.com/influxdb/v1.7/

InfluxDB is an open-source time series database (TSDB) developed by InfluxData. It is written in Go and optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, Internet of Things sensor data, and real-time analytics. It also has support for processing data from Graphite.

Ionic
+Ionic Capacitor vs Apache Cordova (Nov. 1, 2019, 11:15 p.m.)

Ionic Capacitor is an open-source framework that helps you build progressive web, native mobile, and desktop apps. Apache Cordova (formerly PhoneGap) does the same by exposing native device features to a mobile WebView.

----------------------------------------------------------------

Using Cordova to build a hybrid native mobile app, you use Cordova plugin libraries, which behind the scenes build your app using the Android SDK or iOS tooling within the Cordova framework (cordova.js/phonegap.js).


With Ionic Capacitor, you create the app without any Cordova imports, not even cordova.js; instead you use Capacitor's own native plugin runtime, imported as @capacitor/core. Capacitor can also be used without the Ionic framework, and it's backward compatible with Cordova.

----------------------------------------------------------------

In spirit, Capacitor and Cordova are very similar. Both manage a Web View and provide a structured way of exposing native functionality to your web code.

Both provide common core plugins out of the box for accessing services like Camera and the Filesystem. In fact, one of the design goals with Capacitor is to support Cordova plugins out of the box! While Capacitor doesn’t support every plugin (some are simply incompatible), it generally supports most plugins from the Cordova ecosystem.

----------------------------------------------------------------

Capacitor generally expects you to commit your native app project (Xcode, Android Studio, etc.) as a source artifact. This means it’s easy to add custom native code (for example, to integrate an SDK that requires modifying AppDelegate on iOS), build “plugins” to expose native functionality to your web app without having to actually build a standalone plugin, and also debug and manage your app in the way that embraces the best tooling for that platform.

----------------------------------------------------------------

No more deviceready!

Capacitor kills the deviceready event by loading all plugin JavaScript before your page loads, making every API available immediately. Also unlike Cordova, plugin methods are exposed directly as opposed to being called through an exec() function.

That means no more wondering why your app isn’t working and why deviceready hasn’t fired.

----------------------------------------------------------------

Embracing NPM & Easier Plugin Development

Capacitor embraces NPM for every dependency in your project, including plugins and platforms. That means you never run capacitor install plugin-x; you just npm install plugin-x, and when you sync your project Capacitor will detect and automatically link in any plugins you've installed.
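
For example (plugin-x is the placeholder name used above):

npm install plugin-x
npx cap sync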

----------------------------------------------------------------

First-class Electron and PWA support

Capacitor embraces Electron for desktop functionality, along with adding first-class support for web apps and Progressive Web Apps.

----------------------------------------------------------------

+Storage (Oct. 27, 2019, 7:43 p.m.)

Installation:

ionic cordova plugin add cordova-plugin-nativestorage
npm install @ionic-native/native-storage

--------------------------------------------------------------------

Usage:

import { NativeStorage } from '@ionic-native/native-storage/ngx';

constructor(private nativeStorage: NativeStorage) { }

this.nativeStorage.setItem('myitem', {property: 'value', anotherProperty: 'anotherValue'})
    .then(
        () => console.log('Stored item!'),
        error => console.error('Error storing item', error)
    );

this.nativeStorage.getItem('myitem')
    .then(
        data => console.log(data),
        error => console.error(error)
    );

--------------------------------------------------------------------

+Capacitor - Installation (Oct. 15, 2019, 9:16 p.m.)

To add Capacitor to your web app, run the following commands:
npm install --save @capacitor/cli @capacitor/core


Then, initialize Capacitor with your app information.
npx cap init tiptong ir.tiptong.www


Next, install any of the desired native platforms:
npx cap add android
npx cap add ios
npx cap add electron

+Capacitor - Description (Oct. 15, 2019, 9:14 p.m.)

Capacitor is an open-source native container (similar to Cordova) built by the Ionic team that you can use to build web/mobile apps that run on iOS, Android, Electron (Desktop), and as Progressive Web Apps with the same code base. It allows you to access the full native SDK on each platform, and easily deploy to App Stores or create a PWA version of your application.

Capacitor can be used with Ionic or any preferred frontend framework and can be extended with plugins. It has a rich set of official plugins and you can also use it with Cordova plugins.

---------------------------------------------------------------------

Capacitor is a native layer for cross-platform web application development which makes it possible to use hardware features like geolocation, camera, vibration, network, storage, filesystem and many more. The catch is that there is no need to install a plugin for each native feature, as we used to do with Cordova plugins.

---------------------------------------------------------------------

+PWA (Oct. 15, 2019, 8:54 p.m.)

Start an app:
npx create-stencil tiptong-pwa

+CLI Commands (June 28, 2019, 11:24 p.m.)

Generate a new project:
ionic start
ionic start myApp tabs


ionic serve


npm uninstall @ionic-native/splash-screen


ng add @angular/pwa


ionic build --prod


ionic generate module auth
ionic generate module auth --flat
ionic g m auth --flat


List installed plugins:
cordova plugins
cordova plugin ls

+Installation (June 28, 2019, 12:11 a.m.)

1- Install the latest version of Node.js and npm

2- sudo npm install -g ionic

Jquery
+BLOB (Jan. 10, 2020, 10:52 a.m.)

BLOB stands for a Binary Large OBject.

--------------------------------------------------------------

A BLOB can store multimedia content like Images, Videos, and Audio but it can really store any kind of binary data. Since the default length of a BLOB isn't standard, you can define the storage capacity of each BLOB to whatever you'd like up to 2,147,483,647 characters in length.

--------------------------------------------------------------

Since jQuery doesn't have a way to handle blobs, you could try using the native Blob interface.

var oReq = new XMLHttpRequest();
oReq.open("GET", "/myfile.png", true);
oReq.responseType = "arraybuffer";

oReq.onload = function(oEvent) {
    var blob = new Blob([oReq.response], {type: "image/png"});
    // ...
};

oReq.send();

--------------------------------------------------------------

+function - each (Dec. 21, 2019, 11:28 a.m.)

$.each(data, function(i, occupation) {
    console.log(occupation['pk'], occupation['name']);
});

+Timeouts / Intervals (April 24, 2019, 1:03 p.m.)

window.setInterval(function() {
    // call your function here
}, 5000);

-------------------------------------------------------------

$(function () {
    setTimeout(runMyFunction, 10000);
});

-------------------------------------------------------------

setTimeout(expression, timeout); runs the code/function once after the timeout.

setInterval(expression, timeout); runs the code/function in intervals, with the length of the timeout between them.

-------------------------------------------------------------

setInterval repeats the call, setTimeout only runs it once.

-------------------------------------------------------------

setTimeout allows us to run a function once after the interval of time.

setInterval allows us to run a function repeatedly, starting after the interval of time, then repeating continuously at that interval.

-------------------------------------------------------------

+Find element by data attribute value (July 31, 2017, 1:18 p.m.)

$("li[data-step=2]").addClass('active');

+Error: Cannot read property 'msie' of undefined (Oct. 15, 2017, 11:43 a.m.)

Create a file, for example, "ie.js" and copy the content into it. Load it after jquery.js:

jQuery.browser = {};
(function () {
    jQuery.browser.msie = false;
    jQuery.browser.version = 0;
    if (navigator.userAgent.match(/MSIE ([0-9]+)\./)) {
        jQuery.browser.msie = true;
        jQuery.browser.version = RegExp.$1;
    }
})();

-----------------------------------------------------------------

or you can include this after loading the jquery.js file:
<script src="http://code.jquery.com/jquery-migrate-1.2.1.js"></script>

-----------------------------------------------------------------

+Call jquery code AFTER page loading (May 26, 2018, 6:07 p.m.)

$(window).on('load', function() {
    $('#contact-us').click();
});

+if checkbox is checked (July 21, 2018, 11:31 a.m.)

$('#receive-sms').click(function() {
    if ($(this).is(':checked')) {

    }
});

+Disable Arrows on Number Inputs (Oct. 3, 2018, 12:43 p.m.)

CSS:

/* Hide HTML5 Up and Down arrows. */
input[type="number"]::-webkit-outer-spin-button, input[type="number"]::-webkit-inner-spin-button {
-webkit-appearance: none;
margin: 0;
}

input[type="number"] {
-moz-appearance: textfield;
}


---------------------------------------------------------------

jQuery(document).ready(function($) {

    // Disable scroll when focused on a number input.
    $('form').on('focus', 'input[type=number]', function(e) {
        $(this).on('wheel', function(e) {
            e.preventDefault();
        });
    });

    // Restore scroll on number inputs.
    $('form').on('blur', 'input[type=number]', function(e) {
        $(this).off('wheel');
    });

    // Disable up and down keys.
    $('form').on('keydown', 'input[type=number]', function(e) {
        if (e.which == 38 || e.which == 40)
            e.preventDefault();
    });
});

---------------------------------------------------------------

+Combobox (Jan. 22, 2019, 12:43 p.m.)

Get the text value of a selected option:

$( "#myselect option:selected" ).text();

-------------------------------------------------------------

Get the value of a selected option:

$( "#myselect" ).val();

-------------------------------------------------------------

Event:

$('#my_select').change(function() {

})

-------------------------------------------------------------

+Bypass popup blocker on window.open (Jan. 20, 2018, 12:53 a.m.)

$('#myButton').click(function () {
    var redirectWindow = window.open('http://google.com', '_blank');
    $.ajax({
        type: 'POST',
        url: '/echo/json/',
        success: function (data) {
            redirectWindow.location;
        }
    });
});

+Smooth Scrolling (Feb. 21, 2017, 4:09 p.m.)

$(function() {
    $('a[href*="#"]:not([href="#"])').click(function() {
        if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
            var target = $(this.hash);
            target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
            if (target.length) {
                $('html, body').animate({
                    scrollTop: target.offset().top
                }, 1000);
                return false;
            }
        }
    });
});

+Check image width and height before upload with Javascript (Oct. 5, 2016, 3:01 a.m.)

var _URL = window.URL || window.webkitURL;
$('#upload-face').change(function() {
    var file, img;
    if (file = this.files[0]) {
        img = new Image();
        img.onload = function () {
            if (this.width < 255 || this.height < 330) {
                alert('{% trans "The file dimension should be at least 255 x 330 pixels." %}');
            }
        };
        img.src = _URL.createObjectURL(file);
    }
});

+Get value of selected radio button (Aug. 1, 2016, 3:46 p.m.)

$('input[type="radio"][name="machines"]:checked').val();

+Allow only numeric 0-9 in inputbox (April 25, 2016, 9:18 p.m.)

$(".numeric-inputs").keydown(function(event) {
// Allow only backspace, delete, tab, ctrlKey
if ( event.keyCode == 46 || event.keyCode == 8 || event.keyCode == 9 || event.ctrlKey ) {
// let it happen, don't do anything
}
else {
// Ensure that it is a number and stop the keypress
if ((event.keyCode >= 48 && event.keyCode <= 57) || (event.keyCode >= 96 && event.keyCode <= 105)) {
// let it happen, don't do anything
} else {
event.preventDefault();
}
}
});

+Access parent of a DOM using the (event) parameter (April 25, 2016, 1:47 p.m.)

var membership_id = $(e.target).parent().attr('id');

+Prevent big files from uploading (March 5, 2016, 12:08 a.m.)

$('#id_certificate').bind('change', function() {
    if (this.files[0].size > 1048576) {
        alert("{% trans 'The file size should be less than 1 MB.' %}");
        $(this).val('');
    }
});

+Background FullScreen Slider + Fade Effect (Feb. 5, 2016, 7:21 p.m.)

jQuery:

$(document).ready(function() {
    var images = [];
    var titles = [];
    {% for slider in sliders %}
        images.push('{{ slider.image.url }}');
        titles.push('{{ slider.image.motto_en }}');
    {% endfor %}

    var image_index = 0;
    $('#iind-slider').css('background-image', 'url(' + images[0] + ')');
    setInterval(function() {
        image_index++;
        if (image_index == images.length) {
            image_index = 0;
        }
        $('#iind-slider').fadeOut('slow', function() {
            $(this).css('background-image', 'url(' + images[image_index] + ')');
            $(this).fadeIn('slow');
        });
    }, 4000);
});
-----------------------------------------------------------------
CSS:

#iind-slider {
    width: 100%;
    height: 100vh;
    background: no-repeat fixed 0 0;
    background-size: 100% 100%;
}

+Convert Seconds to real Hour, Minutes, Seconds (Feb. 1, 2016, 10:54 p.m.)

// Convert seconds to real Hour:Minutes:Seconds
function secondsTimeSpanToHMS(s) {
    let h = Math.floor(s / 3600); // Get whole hours
    s -= h * 3600;
    let m = Math.floor(s / 60); // Get remaining minutes
    s -= m * 60;
    return h + ":" + (m < 10 ? '0' + m : m) + ":" + (s < 10 ? '0' + s : s); // Zero padding on minutes and seconds
}


setInterval(function() {
    var left_time = secondsTimeSpanToHMS(server_left_time);
    $('#left-time').find('span').html(left_time);
    server_left_time -= 1;
}, 1000);

+Error - TypeError: $.browser is undefined (Jan. 15, 2016, 1:53 a.m.)

Find this script file and include it after the main jquery file:
jquery-migrate-1.0.0.js

+Multiple versions of jQuery in one page (Jan. 8, 2016, 5:54 p.m.)

1- Load the jquery libraries like the example:

<script type="text/javascript" src="{% static 'iind/js/jquery-1.7.1.min.js' %}"></script>
<script type="text/javascript">
var jQuery_1_7_1 = $.noConflict(true);
</script>
<script type="text/javascript" src="{% static 'iind/js/jquery-1.11.3.min.js' %}"></script>
<script type="text/javascript">
var jQuery_1_11_3 = $.noConflict(true);
</script>
------------------------------------------------------------------------------------
2- Then use them as follows:

jQuery_1_11_3(document).ready(function() {
    jQuery_1_11_3(".dropdown").hover(
        function() {
            jQuery_1_11_3('.dropdown-menu', this).stop(true, true).fadeIn("fast");
            jQuery_1_11_3(this).toggleClass('open');
            jQuery_1_11_3('b', this).toggleClass("caret caret-up");
        }, function() {
            jQuery_1_11_3('.dropdown-menu', this).stop(true, true).fadeOut("fast");
            jQuery_1_11_3(this).toggleClass('open');
            jQuery_1_11_3('b', this).toggleClass("caret caret-up");
        });
});
------------------------------------------------------------------------------------
And change the last line of jQuery libraries like this:

Change
}(jQuery, window, document));

To:
}(jQuery_1_11_3, window, document));
------------------------------------------------------------------------------------
And for bootstrap.min.js, I had to change this long line: (The last word, jQuery needed to be changed):

if("undefined"==typeof jQuery)throw new Error("Bootstrap's JavaScript requires jQuery");+function(a){var b=a.fn.jquery.split(" ")[0].split(".");if(b[0]<2&&b[1]<9||1==b[0]&&9==b[1]&&b[2]<1)throw new Error("Bootstrap's JavaScript requires jQuery version 1.9.1 or higher")}(jQuery)

To:
if("undefined"==typeof jQuery)throw new Error("Bootstrap's JavaScript requires jQuery");+function(a){var b=a.fn.jquery.split(" ")[0].split(".");if(b[0]<2&&b[1]<9||1==b[0]&&9==b[1]&&b[2]<1)throw new Error("Bootstrap's JavaScript requires jQuery version 1.9.1 or higher")}(jQuery_1_11_3)
------------------------------------------------------------------------------------

+Redirect Page (Dec. 20, 2015, 11:57 a.m.)

// similar behavior as an HTTP redirect
window.location.replace("http://stackoverflow.com");

// similar behavior as clicking on a link
window.location.href = "http://stackoverflow.com";

$(location).attr('href','http://yourPage.com/');

+Smooth scrolling when clicking an anchor link (Sept. 10, 2015, midnight)

var $root = $('html, body');
$('a').click(function () {
    $root.animate({
        scrollTop: $($.attr(this, 'href')).offset().top
    }, 1500);
    return false;
});

+Attribute Selector (Aug. 26, 2015, 4:01 p.m.)

$("[id=choose]")
---------------------------------------------------------------------------------------------
$( "input[value='Hot Fuzz']" ).next().text( "Hot Fuzz" );
---------------------------------------------------------------------------------------------
$("ul").find("[data-slide='" + current + "']");

$("ul[data-slide='" + current +"']");
---------------------------------------------------------------------------------------------

+Underscore Library (Aug. 26, 2015, 2:01 p.m.)

if (_.contains(intensity_filters, intensity_value)) {
    intensity_filters = _.without(intensity_filters, intensity_value);
}
---------------------------------------------------------------------------------------------

+Get a list of checked/unchecked checkboxes (Aug. 26, 2015, 1:51 p.m.)

var selected = [];
$('#checkboxes input:checked').each(function() {
    selected.push($(this).attr('name'));
});
------------------------------------------------------------------------------------------------------
And for getting the unchecked ones:
$('#checkboxes input:not(:checked)').each(function() {});

+Comma Separate Number (Aug. 14, 2015, 11:59 a.m.)

function commaSeparateNumber(val) {
    while (/(\d+)(\d{3})/.test(val.toString())) {
        val = val.toString().replace(/(\d+)(\d{3})/, '$1' + ',' + '$2');
    }
    return val;
}

+Hide a DIV when the user clicks outside of it (Aug. 12, 2015, 2:42 p.m.)

$(document).mouseup(function (e) {
    var container = $("#my-cart-box");
    if (!container.is(e.target) && container.has(e.target).length === 0) {
        container.hide();
    }
});

+Reset a form in jquery (Aug. 1, 2015, 1:19 a.m.)

$('#the_form')[0].reset()

+Event binding on dynamically created elements (Aug. 14, 2015, 12:06 a.m.)

Add Click event for dynamically created tr in table

$('.found-companies-table').on('click', 'tr', function() {
    alert('hi');
});
-----------------------------------------------------------------------------------------------------
$("body").on("mouseover mouseout", "select", function(e) {

    // Do some code here

});
-----------------------------------------------------------------------------------------------------
$(staticAncestors).on(eventName, dynamicChild, function() {});
-----------------------------------------------------------------------------------------------------
$('body').on('click', '.delete-order', function(e) { });

+Select all (table rows) except first (July 18, 2015, 3:12 a.m.)

$("div.test:not(:first)").hide();
--------------------------------------------------------------------------
$("div.test:not(:eq(0))").hide();
--------------------------------------------------------------------------
$("div.test").not(":eq(0)").hide();
--------------------------------------------------------------------------
$("div.test:gt(0)").hide();
--------------------------------------------------------------------------
$("div.test").gt(0).hide();
--------------------------------------------------------------------------
$("div.test").slice(1).hide();

+Deleting all rows in a table (July 15, 2015, 3:29 p.m.)

$("#mytable > tbody").html("");
---------------------------------------- OR ----------------------------------------
$("#myTable").empty();
---------------------------------------- OR ----------------------------------------
$("#myTable").find("tr:gt(0)").remove();
---------------------------------------- OR ----------------------------------------
$("#myTable").children( 'tr:not(:first)' ).remove();

+Plugins (April 6, 2016, 8:13 p.m.)

http://demo.evatheme.com/item/?product=journey_html
http://tutorialzine.com/2013/04/50-amazing-jquery-plugins/
http://www.unheap.com/
https://www.freshdesignweb.com/image-hover-effects/
http://apycom.com/webdev/top-creative-and-beautiful-bootstrap-slider-samples-2016-199.html
http://cssslider.com/jquery-content-slider-31.html
http://joaopereirawd.github.io/animatedModal.js/
http://www.jqueryscript.net/demo/Material-Inspired-Morphing-Button-with-jQuery-velocity-js-Quttons/
http://www.jqueryscript.net/demo/Modal-Like-Sliding-Panel-with-jQuery-CSS3/
http://www.jqueryscript.net/menu/Stylish-Off-canvas-Sidebar-Menu-with-jQuery-CSS3.html
http://plugins.compzets.com/animatescroll/
https://1stwebdesigner.com/jquery-gallery/
https://tympanus.net/codrops/2012/09/03/bookblock-a-content-flip-plugin/
http://www.eyecon.ro/spacegallery/
http://keith-wood.name/imageCube.html
http://www.jqueryscript.net/demo/Flexible-3D-Flipping-Cube-Pluigin-HexaFlip/index3.html
http://tympanus.net/Development/BookBlock/
http://tympanus.net/Development/ImageTransitions/
http://renatorib.github.io/janimate/
http://git.blivesta.com/rippler/
http://www.jqueryscript.net/demo/Simple-jQuery-Plugin-For-Responsive-Sliding-View-SimpleSlideView/
http://tympanus.net/TipsTricks/DirectionAwareHoverEffect/
http://www.jqueryscript.net/demo/jQuery-Plugin-For-Circular-Popup-Html-Elements-Radiate-Elements/
http://www.jqueryscript.net/demo/jQuery-3D-Animation-Plugin-With-HTML5-CSS3-Transforms-jworld/
http://lab.ejci.net/favico.js/
http://www.jqueryscript.net/demo/jQuery-Plugin-To-Auto-Scroll-Down-A-Web-Page-Hungry-Scroller/
http://www.jqueryscript.net/demo/jQuery-Plugin-To-Auto-Scroll-Down-Html-Page-Slow-Auto-Scroll/
https://haltu.github.io/muuri/
https://ilkeryilmaz.github.io/timelinejs/
http://www.thepetedesign.com/demos/tiltedpage_scroll_demo.html
https://github.com/soundar24/roundSlider

+Focus the first input in your form (June 30, 2015, 3:05 p.m.)

$('.forms').find("input[type!='hidden']").first().focus();

+jQuery `data` vs `attr`? (Aug. 21, 2014, 3:03 p.m.)

If you are passing data to a DOM element from the server, you should set the data on the element:

<a id="foo" data-foo="bar" href="#">foo!</a>

The data can then be accessed using .data() in jQuery:

console.log( $('#foo').data('foo') );
// outputs "bar"

However when you store data on a DOM node in jQuery using data, the variables are stored on the node object. This is to accommodate complex objects and references, as storing the data on the node element as an attribute will only accommodate string values.

Continuing the example from above:

$('#foo').data('foo', 'baz');

console.log( $('#foo').attr('data-foo') );
// outputs "bar" as the attribute was never changed

console.log( $('#foo').data('foo') );
// outputs "baz" as the value has been updated on the object

Also, the naming convention for data attributes has a bit of a hidden "gotcha":

HTML:
<a id="bar" data-foo-bar-baz="fizz-buzz" href="#">fizz buzz!</a>

JS:
console.log( $('#bar').data('fooBarBaz') );
// outputs "fizz-buzz" as hyphens are automatically camelCase'd

The hyphenated key will still work:

HTML:
<a id="bar" data-foo-bar-baz="fizz-buzz" href="#">fizz buzz!</a>

JS:
console.log( $('#bar').data('foo-bar-baz') );
// still outputs "fizz-buzz"

However the object returned by .data() will not have the hyphenated key set:

$('#bar').data().fooBarBaz;       // works
$('#bar').data()['fooBarBaz'];    // works
$('#bar').data()['foo-bar-baz'];  // does not work

It's for this reason I suggest avoiding the hyphenated key in javascript.

The .data() method will also perform some basic auto-casting if the value matches a recognized pattern:

HTML:
<a id="foo"
   href="#"
   data-str="bar"
   data-bool="true"
   data-num="15"
   data-json='{"fizz":["buzz"]}'>foo!</a>

JS:
$('#foo').data('str');   // "bar"
$('#foo').data('bool');  // true
$('#foo').data('num');   // 15
$('#foo').data('json');  // {fizz:['buzz']}

This auto-casting ability is very convenient for instantiating widgets & plugins:

$('.widget').each(function () {
    $(this).widget($(this).data());
    // -or-
    $(this).widget($(this).data('widget'));
});

If you absolutely must have the original value as a string, then you'll need to use .attr():

HTML:
<a id="foo" href="#" data-color="ABC123"></a>
<a id="bar" href="#" data-color="654321"></a>

JS:
$('#foo').data('color').length;  // 6
$('#bar').data('color').length;  // undefined, length isn't a property of numbers

$('#foo').attr('data-color').length;  // 6
$('#bar').attr('data-color').length;  // 6

+Leading colon in a jQuery selector (Aug. 21, 2014, 3:01 p.m.)

What's the purpose of a leading colon in a jQuery selector?
The :input selector basically selects all form controls (input, textarea, select and button elements), whereas the input selector selects all elements by the tag name input.

Since a radio button is a form element that uses the input tag, both selectors can be used to select radio buttons. However, the two approaches differ in how they find the elements, and thus each has different performance benefits.

+Colon and question mark (Aug. 21, 2014, 3 p.m.)

What is the meaning of the colon (:) and question mark (?) in jquery?
That's an inline if.
If true, do the thing after the question mark, otherwise do the thing after the colon. The thing before the question mark is what you're testing.

+Commands and examples (Aug. 21, 2014, 2:57 p.m.)

$('#toggle_message').attr('value', 'Show')
-------------------------------------------------------
$('#message').toggle('fast');
-------------------------------------------------------
$(document).ready(function() {});
$(window).load(function() {});
-------------------------------------------------------
$(window).unload(function() {
    alert('You\'re leaving this page');
});
This alert will be raised when you move to another page by clicking on a link, click the back or previous buttons of the browser, or when you close the tab.
-------------------------------------------------------
$('*').length;
Returns the number of all the elements in the page. (length is a property, not a method.)
-------------------------------------------------------
$('p:first')
$('p:last')
$('input:button')
$('input[type="email"]')
-------------------------------------------------------
$(':text').focusin(function() {});
$(':text').blur(function() {});
-------------------------------------------------------
$('#email').attr('value', 'Write your email address').focus(function() {
    // Some code
}).blur(function() {
    // Some code
});
-------------------------------------------------------
search_name = jQuery.trim($(this).val());
$("#names li:contains('" + search_name + "')").addClass('highlight');
-------------------------------------------------------
$('input[type="file"]').change(function() {
$(this).next().removeAttr('disabled');
}).next().attr('disabled', 'disabled');
-------------------------------------------------------
$('#menu_link').dblclick(function() {});
-------------------------------------------------------
$('#click_me').toggle(function() {
    // Code here
}, function() {
    // Code here
});
-------------------------------------------------------
var scroll_pos = $('#some_text').scrollTop();
-------------------------------------------------------
$('#some_text').select(function() {});
-------------------------------------------------------
$('a').bind('mouseenter mouseleave', function() {
$(this).toggleClass('bold');
});
bind() is specified to use for series of events.
-------------------------------------------------------
$('.hover').mousemove(function(e) {
    $('some_div').text('x: ' + e.clientX + ' y: ' + e.clientY);
});
-------------------------------------------------------
Hover over description:
$('.hover').mousemove(function(e) {
    var hovertext = $(this).attr('hovertext');
    $('#hoverdiv').text(hovertext).show();
    $('#hoverdiv').css('top', e.clientY + 10).css('left', e.clientX + 10);
}).mouseout(function() {
    $('#hoverdiv').hide();
});

Create an empty div with id="hoverdiv" in HTML, and style it in CSS.
-------------------------------------------------------
.addClass('class1 class2 class3')
-------------------------------------------------------
$(":input').focus(function() {
$(this).toggleClass('highlight');
});
-------------------------------------------------------
Traversing using .each():

$('input[type="text"]').each(function(index) {
alert(index);
});
This index argument prints 0, 1, 2, ... per the items which are selected by .each statement/function.
-------------------------------------------------------
These two statements do the same thing:
$('.names li:first').append('Hello');
$('.names').find('li').first().append('Hello');

if($(this).has('li').length == 0) { }

if($(this).has(':contains')) {}
-------------------------------------------------------
$(this).nextAll().toggle();
This is useful when you want to toggle a sub-menu using the first/top item.
-------------------------------------------------------
$(this).hide('slow', 'linear', function() {});
.slideUp()
.slideDown()
.slideToggle()

.stop() will cause the animation of the slide effect to stop.
-------------------------------------------------------
.fadeTo(100, 0.4, function() {})
$('.fadeto').not(this).fadeTo(100, 0.4);
-------------------------------------------------------
$('.fadeto').css('opacity', '0.4');
$('.fadeto').mouseover(function() {
$(this).fadeTo(100, 1);
$('.fadeto').not(this).fadeTo(100, 0.4);
});
-------------------------------------------------------
append()
appendTo()
clone()
-------------------------------------------------------
$('html, body').animate({scrollTop: 0}, 10000);
-------------------------------------------------------
$('#terms').scroll(function() {
    var textarea_height = $(this)[0].scrollHeight; // scrollHeight is a property, not a function
    var scroll_height = textarea_height - $(this).innerHeight();

    var scroll_top = $(this).scrollTop();
});
-------------------------------------------------------
var names = ['Alex', 'Billy', 'Dale'];
if (jQuery.inArray('Alex', names) != -1) {
    alert('Found');
}
-------------------------------------------------------
$.each(names, function(index, value) {})
-------------------------------------------------------
setInterval(function() {
    var timestamp = jQuery.now();
    $("#time").text(timestamp);
}, 1);
-------------------------------------------------------
(function($) {
    $.fn.your_new_function_name = function() {};
})(jQuery);
-------------------------------------------------------
Options:
$('#drag').draggable({axis: 'x'});
$('#drag').draggable({containment: 'document'});
$('#drag').draggable({containment: 'window'});
$('#drag').draggable({containment: 'parent'});
$('#drag').draggable({containment: [0, 0, 200, 200]});
$('#drag').draggable({cursor: 'pointer'});
$('#drag').draggable({opacity: 0.6});
$('#drag').draggable({grid: [20, 20]});
$('#drag').draggable({revert: true});
$('#drag').draggable({revertDuration: 1000});
Events:
$('#drag').draggable({start: function() {}});
$('#drag').draggable({drag: function() {}});
$('#drag').draggable({stop: function() {}});
-------------------------------------------------------
$('#drop').droppable({hoverClass: 'border'});
$('#drop').droppable({tolerance: 'fit'});
$('#drop').droppable({tolerance: 'intersect'});
$('#drop').droppable({tolerance: 'pointer'});
$('#drop').droppable({tolerance: 'touch'});
$('#drop').droppable({accept: '.name'});
$('#drop').droppable({over: function() {}});
$('#drop').droppable({out: function() {}});
$('#drop').droppable({drop: function() {}});
-------------------------------------------------------
$('#names').sortable({containment: 'parent'});
$('#names').sortable({tolerance: 'pointer'});
$('#names').sortable({cursor: 'pointer'});
$('#names').sortable({revert: true});
$('#names').sortable({opacity: 0.6});
$('#names').sortable({connectWith: '#places, #names'});
$('#names').sortable({update: function() {}});
-------------------------------------------------------
Resizable:
This required a css file `jquery-ui-custom.css`

$('#box').resizable({containment: 'document'});
$('#box').resizable({animate: true});
$('#box').resizable({ghost: true});

$('#box').resizable({animateDuration: 'slow'});
`slow`, `medium`, `fast`, `normal`, `1000`

$('#box').resizable({animateEasing: 'swing'});
`swing`, `linear`

$('#box').resizable({aspectRatio: true});
`0.4`, `2/5`, `9/10`

$('#box').resizable({autoHide: true});

$('#box').resizable({handles: 'n, e, se'});
n=North, e=East, w=West, s=South, or `all`
If you do not specify `all`, you cannot resize the box from the left or top, as those handles sit too close to the browser edge.

$('#box').resizable({grid: [20, 20]});
$('#box').resizable({minHeight: 200});
$('#box').resizable({maxHeight: 100});
$('#box').resizable({minWidth: 200});
$('#box').resizable({maxWidth: 100});
-------------------------------------------------------
Accordion:
$('#content').accordion({fillSpace: true})
$('#content').accordion({icons: {'header': 'ui-icon-plus', 'headerSelected': 'ui-icon-minus'}})
$('#content').accordion({collapsible: true})
$('#content').accordion({active: 2})
`false`
-------------------------------------------------------
Dialog:
$('#dialog').dialog()
$('#dialog').attr('title', 'Saved').text('Settings were saved.').dialog();
.dialog({buttons: {'OK': function() {
    $(this).dialog('close');
}}});
closeOnEscape: true
draggable: false
resizable: false
show: 'fade', 'bounce'
modal: true
position: 'top', 'top, left', 'bottom', 'top, center', [100, 100]
-------------------------------------------------------
Progressbar:

var val = 0;
var interval = setInterval(function() {
    val = val + 1;
    $('#pb').progressbar({value: val});
    $('#percent').text(val + '%');
    if (val == 100) {
        clearInterval(interval);
    }
});
----------------------------------------------

$("#header_menus img:not(.hover_menus)").mouseenter(function() {
$(this).hide();
$("#" + $(this).attr('data-hover')).show();
});

KDE
+KDE - Location of User Wallpapers (Oct. 23, 2019, 10:50 a.m.)

~/.local/share/wallpapers/

+Editing KDE Application Launcher Menus (May 11, 2015, 5:31 p.m.)

Use `kmenuedit`

+Delete session (March 20, 2015, 11:36 a.m.)

Delete the files in:
rm ~/.kde/share/config/session/*

And delete the file:
~/.kde/share/config/ksmserverrc

Kivy
+Create a package for IOS (Nov. 4, 2015, 6:06 a.m.)

http://kivy.org/docs/guide/packaging-ios.html

sudo apt-get install autoconf automake libtool pkg-config

+PyCharm Completion (March 19, 2015, 9:25 a.m.)

https://github.com/kivy/kivy/wiki/Setting-Up-Kivy-with-various-popular-IDE%27s
---------------------------------------------------------------------------------------------
1-Download this jar plugin:
https://github.com/Zen-CODE/kivybits/blob/master/IDE/PyCharm_kv_completion.jar?raw=true

2-On Pycharm’s main menu, click "File" -> Import Settings

3-Select this file and PyCharm will present a dialog with filetypes ticked. Click OK.

4-You are done. Restart PyCharm

+Android API (Feb. 12, 2015, 9:54 p.m.)

http://developer.android.com/reference/android/speech/tts/TextToSpeech.html
I have this class in Java docs:
android.speech.tts.TextToSpeech

And in python it is:
TextToSpeech = autoclass('android.speech.tts.TextToSpeech')

Based on these, I thought for getting another class in Java (android.speech.tts.TextToSpeech.Engine) I had to:
Engine = autoclass('android.speech.tts.TextToSpeech.Engine')

But I got this error at runtime on my cellphone and the app would not open:
java.lang.ClassNotFoundException: android.speech.tts.TextToSpeech.Engine

I even could not access `Engine` using the pythonic way either:
TextToSpeech.Engine

I had to access the class by:
Engine = autoclass('android.speech.tts.TextToSpeech$Engine')
--------------------------------------------------------------------------------------------
Python Dictionaries = Java HashMap:

Java:
HashMap<String, String> phoneBook = new HashMap<String, String>();
phoneBook.put("Mike", "555-1111");
phoneBook.put("Lucy", "555-2222");
phoneBook.put("Jack", "555-3333");

Python:
phoneBook = {}
phoneBook = {"Mike":"555-1111", "Lucy":"555-2222", "Jack":"555-3333"}

And for implementing it in Kivy:
HashMap = autoclass('java.util.HashMap')
hash_map = HashMap()
hash_map.put(key, value)
---------------------------------------------------------------------------------------------
To access nested classes, use $ like: autoclass('android.provider.MediaStore$Images$Media').
---------------------------------------------------------------------------------------------

+Sign apk files (Oct. 4, 2015, 11:42 a.m.)

https://developer.android.com/tools/publishing/app-signing.html#studio

1-Generate a private key using keytool. For example:
$ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000
This example prompts you for passwords for the keystore and key, and to provide the Distinguished Name fields for your key. It then generates the keystore as a file called my-release-key.keystore. The keystore contains a single key, valid for 10000 days. The alias is a name that you will use later when signing your app.

2-Compile your app in release mode to obtain an unsigned APK:
buildozer android release

3-Sign your app with your private key using jarsigner:
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore my_application.apk alias_name
This example prompts you for passwords for the keystore and key. It then modifies the APK in-place to sign it. Note that you can sign an APK multiple times with different keys.

4-Verify that your APK is signed. For example:
jarsigner -verify -verbose -certs my_application.apk

5-Align the final APK package using zipalign.
zipalign is not available in the Synaptic Package Manager; it ships with the Android SDK Build Tools. Use locate to find `zipalign` and create a symbolic link in /usr/bin:
ln -s /home/moh3en/Programs/Android/Development/android-sdk-linux/build-tools/android-5.0/zipalign /usr/bin/
zipalign -v 4 your_project_name-unaligned.apk your_project_name.apk
---------------------------------------------------------------------------------------------
Example:

buildozer android release

jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore excludes/my-release-key.keystore bin/NimkatOnline-1.2.4-release-unsigned.apk mohsen_hassani

jarsigner -verify -verbose -certs bin/NimkatOnline-1.2.4-release-unsigned.apk

zipalign -v 4 bin/NimkatOnline-1.2.4-release-unsigned.apk bin/NimkatOnline-1.2.4.apk

+Label (Feb. 12, 2015, 9:52 p.m.)

When creating a label, by default it is placed at the bottom-left corner with part of it hidden; changing its `size` property solves this:
size: self.texture_size

Scrolling a Label:
Label:
    text: str('A very long text' * 100)
    font_size: 50
    text_size: self.width, None
    size_hint_y: None
    height: self.texture.size[1]

+FloatLayout (Feb. 12, 2015, 9:51 p.m.)

Similar to RelativeLayout, except now position is relative to window, and not Layout.
Thus in FloatLayout, pos = 0, 0 refers to lower-left corner.

+RelativeLayout (Feb. 12, 2015, 9:51 p.m.)

Each child widget's size and position has to be given.
size_hint, pos_hint: numbers relative to Layout.
If those two parameters are used, it does not make any difference if RelativeLayout or FloatLayout are used, as both will yield the same result.

+GridLayout (Feb. 12, 2015, 9:51 p.m.)

Similar to StackLayout 'lr-tb'
Either cols or rows has to be given and the Layout adjusts so the given number is the maximum number of cols or rows.

+Canvas (Feb. 12, 2015, 9:51 p.m.)

Canvas refers to graphical instructions.
The instructions could be non-visual, called context instructions, or visual, called vertex instructions.
An example of a non-visual instruction would be to set a color.
An example of a visual instruction would be draw a rectangle.

+StackLayout (Feb. 12, 2015, 9:50 p.m.)

1-More flexible than BoxLayout
2-Orientations:
right to left or left to right
top to bottom or bottom to top
rl-bt, rl-tb, lr-bt, lr-tb (Row-wise)
bt-rl, bt-lr, tb-rl, tb-lr (Column-wise)

+Snippets (Feb. 12, 2015, 9:50 p.m.)

pos_hint: {'x': .1}
size_hint: [.2, 2]
pos_hint: {'center_x': .3}
----------------------------------------------------------------------------------------------
textinput.bind(text=label.setter('text'))
----------------------------------------------------------------------------------------------
in kv file:
TextInput:
    on_text: my_label.color = [random.random() for i in xrange(3)] + [1]
----------------------------------------------------------------------------------------------
center: self.parent.center
----------------------------------------------------------------------------------------------

+on_touch_up vs on_release (Feb. 12, 2015, 9:49 p.m.)

When using on_touch_up event with partial, you have to pass three arguments to the calling method:

Example:
button.ids.speaker_button.bind(on_touch_up=partial(self.speak_word, main_word))

@staticmethod
def speak_word(word, arg1, arg2): # I don't know yet what these two extra args are used for.
print(word)

After touching the button, all identical buttons on the page are also triggered. You have to solve it with something like this:
on_touch_up: vibrate() if self.collide_point(*args[1].pos) else None
******
But using on_release, only two args are passed:
button.ids.speaker_button.bind(on_release=partial(self.speak_word, main_word))

@staticmethod
def speak_word(word, button):
    print(word)

After clicking, the only button which has been touched, will be triggered. That's good!

+Partial (Feb. 12, 2015, 9:49 p.m.)

In Kivy, you register a button release callback with the “bind()” function:
myButton.bind(on_release=my_button_release)
But the signature of the “on_release” method is “on_release(self)”, which means that the method you provide will receive only one parameter — the button that generated the event. When you release the button, Kivy will invoke your callback method and pass in the button that you released.

So does this mean we can’t pass user-defined parameters to our handlers? Does it mean we need to use globals or a bunch of specialized methods to write our button handlers? No, this is where Python’s functools.partial comes in handy.

To oversimplify, partial allows you to create a function with one set of arguments that calls another function with a different set of arguments. For example, consider the following function that takes two arguments:

from functools import partial

def addTwoNumbers(x, y):
    print "x: %d, y: %d" % (x, y)
    return x + y

You can create a partial from this that automatically supplies one or more of the arguments. Let's create one that supplies '1' for 'x':

addOne = partial(addTwoNumbers, 1)
Which you would then invoke as such:

>>> #We pass in '2' for 'y' here. The partial fills in '1' for 'x'
...
>>> addOne(2)
x: 1, y: 2
3
*****************
Let’s create a function that can set any label to any text:

def changeLabel(label, text, button):
    # Kivy gives us 'button' to let us know which button
    # caused the event, but we don't use it
    label.text = text

In our UI setup, we can then bind two different buttons to this handler, creating partials that supply values for the extra arguments:

startButton = Button(text='Start Car')
stopButton = Button(text='Stop Car')

startButton.bind(
    on_press=partial(
        changeLabel,
        statusLabel,
        "Starting Car..."))

stopButton.bind(
    on_press=partial(
        changeLabel,
        statusLabel,
        "Stopping Car..."))
Now, by inspecting the setup code, it’s fairly easy to see what the UI does when various events occur. We can even extend this further to perform an action after setting the label:

def changeLabelAndRun(label, text, command, button):
    label.text = text
    command()


This allows our setup code to specify a UI behavior and trigger an action (assume ‘startCar’ and ‘stopCar’ have been defined as functions elsewhere):

startButton.bind(
    on_press=partial(
        changeLabelAndRun,
        statusLabel, "Starting Car...",
        startCar))

stopButton.bind(
    on_press=partial(
        changeLabelAndRun,
        statusLabel, "Stopping Car...",
        stopCar))
Unlike C, there’s no casting, no packing things into structs, and it’s easy to extend for different needs. Snazzy! This might not scale perfectly to complicated UI interactions, but it greatly simplifies straightforward event processing, making it easier to see at a glance what the application is doing.

+BoxLayout vs. GridLayout (Sept. 9, 2015, 12:47 p.m.)

The widgets in a BoxLayout can have different width and height, but in a GridLayout, each row or column should have the same size.

The widgets in BoxLayout are placed from bottom to top, but those in a GridLayout are placed from top to bottom.

In a BoxLayout the widgets can not be placed next to each other! I mean, they are placed one widget per row (if orientation is vertical) or column (if orientation is horizontal)

+Background Image for Button (Feb. 12, 2015, 9:48 p.m.)

background_normal: 'home_button.png'
background_down: 'home_button_down.png'

+DropDown (Feb. 12, 2015, 9:48 p.m.)

1- First of all, make sure the dropdown doesn't get triggered while the widget is not on screen. That is, you should only instantiate it; do not pass it to add_widget or anything else that would invoke it.

2-For getting the data which is passed through `a_button.on_release: root.select('the_value')`, you have to use:
on_select: select_controller(args[1])
on the DropDown. Here is the example:
<MainDropDown@DropDown>:
    on_select: select_controller(args[1]) # Try printing `args` to see all the items.
    Button:
        text: 'Update Database'
        on_release: root.select('update_db')

+Spinner vs. DropDown (Sept. 9, 2015, 12:44 p.m.)

Spinner is a widget that provides a quick way to select one value from a set. In the default state, a spinner shows its currently selected value. Touching the spinner displays a dropdown menu with all other available values from which the user can select a new one.

+Commands (Feb. 12, 2015, 9:47 p.m.)

buildozer android debug

+Buildozer (Feb. 12, 2015, 9:47 p.m.)

-------------------Installation:-------------------
1-git clone https://github.com/kivy/buildozer
2-Activate virtualenv (and test if the default `python` command will lead to python version 2.7) because buildozer needs python2.7
3-cd_to_downloaded_buildozer
4-python setup.py install
----------------------------------------------------------------------------
buildozer init
buildozer android debug
buildozer android logcat
adb logcat
----------------------------------------------------------------------------
AndroidSDK and AndroidNDK are needed for buildozer, if you have already downloaded them, provide the paths like these:
android.ndk_path = /home/moh3en/Programs/Android/Development/android-ndk-r9c
android.sdk_path = /home/moh3en/Programs/Android/Development/android-sdk-linux

if not, buildozer will try to download them, but unfortunately because of the embargo, they won't get downloaded since the source originates from google.com. So you have to download them using proxy and untar/unzip them somewhere.
----------------------------------------------------------------------------
sudo adb uninstall com.nimkatonline.en
sudo adb install bin/NimkatOnline-1.2.0.apk
--------------------------------------------------------------------------

+Installing python packages (Feb. 12, 2015, 9:46 p.m.)

For installing python packages use this command:
./distribute.sh -m "kivy requests==2.1.0 SQLAlchemy"

You will need these environment variables:
export ANDROIDSDK="/home/mohsen/Programs/android-sdk-linux"
export ANDROIDNDK="/home/mohsen/Programs/android-ndk-r8c"
export ANDROIDNDKVER=r8c
export ANDROIDAPI=14

+Python Android Path (Feb. 12, 2015, 9:46 p.m.)

This is the path to the python used for android. Use this path for managing (installing or uninstalling) packages which are going to be installed, packed and used for your app.
python-for-android/dist/default/private/lib/python2.7/site-packages

+Error ==> Source resource does not exist: python-for-android/dist/default/project.properties (Feb. 12, 2015, 9:43 p.m.)

export ANDROIDAPI=15

+Chat (Feb. 12, 2015, 9:42 p.m.)

<Mohsen_Hassani> Hello guys. I am very new to Kivy. I am using psycopg2 to read data from my remote VPS. I wanted to know if it will work after making apk too?
<brousch> Mohsen_Hassani: Pure Python modules will work fine. I'm not sure if psycopg2 is pure Python
<kovak> Mohsen_Hassani: the first step is to write a recipe for python-for-android to see if you can compile for ARM without any problems
<kovak> I think psycopg2 has C bits
<kovak> if it compiles in arm no problem you are good to go, if not you may need to patch the source
<brousch> However, except in very rare cases, your Android app should not be communicating directly with your database server. There should be a proper API on top of that database
-------------------------------------------------------------------------------
<tito> Mohsen_Hassani: the best shot you have is to put your tgz into a directory, go into the directory, and start python -m SimpleHTTPServer
<tito> then do: URL_python=http://localhost:8000/Python-2.7.2.tar.bz2 URL_hostpython=http://localhost:8000/Python-2.7.2.tar.bz2 ./distribute.sh -m 'openssl pil kivy'

+Building the application (Feb. 12, 2015, 9:36 p.m.)

cd dist/default
./build.py --permission INTERNET --orientation sensor --package com.mohsenhassani.notes --name My\ Notes --version 1.0 --dir ~/Projects/kivy_projects/notes/ debug
----------------------------------------------------------------------------
Install the debug apk to your device:
adb install bin/touchtracer-1.0-debug.apk
----------------------------------------------------------------------------
/usr/bin/python2.7 build.py --name 'My Notes' --version 1.0 --package com.mohsenhassani.notes --private /home/mohsen/Projects/kivy_projects/notes/.buildozer/android/app --sdk 14 --minsdk 8 --permission INTERNET --icon /home/mohsen/Projects/kivy_projects/notes/./static/icon.png --orientation sensor debug
----------------------------------------------------------------------------

+Installation (July 17, 2015, 1:26 a.m.)

Installation:
http://kivy.org/docs/installation/installation-linux.html#linux-run-app

---------------------------------------------------------------------------------------------
Installation Steps:
1-apt-get install python-gst0.10-dev python-gst-1.0 freeglut3-dev libsdl-image1.2-dev libsdl-ttf2.0-dev libsdl-mixer1.2-dev libsmpeg-dev libportmidi-dev libswscale-dev libavformat-dev libavcodec-dev libv4l-dev libserf-1-1 libsvn1 subversion openjdk-7-jdk python-pygame
2-Create and activate a virtualenv
3-easy_install requests
4-easy_install -U setuptools
5-pip install cython==0.20
6-pip install pygments
7-pip install --allow-all-external pil --allow-unverified pil

8.1- For installing the next step (pygame) you will need to create a symlink first, or you will get the following error:
fatal error: linux/videodev.h: No such file or directory:
sudo ln -s /usr/include/libv4l1-videodev.h /usr/include/linux/videodev.h

8.2-pip install pygame (It won't be found or downloaded! You need to download the tar file from www.pygame.org/download.shtml and install it using pip install <the_downloaded_tar_file>.)

9-pip install kivy

Kotlin
+Objects Declarations and Companion Objects - Singleton (May 22, 2019, 11:58 p.m.)

Singleton:
When we have just ONE INSTANCE of a class in the whole application.

object MySingleton

object MySingleton {
    fun someFunction(...) {...}
}

And then use it:
MySingleton.someFunction(...)

-----------------------------------------------------

In Java, we implement a SINGLETON by using "static" variables and methods.

In Kotlin we use the "object" keyword to declare a singleton.
Contrary to a class, an object can't have any constructor, but init blocks are allowed if some initialization code is needed.


object Customer {
    var id: Int = -1 // Behaves like STATIC variable

    init {

    }

    fun registerCustomer() { // Behaves like STATIC method

    }
}


We don't need to instantiate the class! We call its members without creating an instance.
Customer.id = 27
Customer.registerCustomer()

-----------------------------------------------------

Companion Objects are the same as "object" but declared within a class.

class MyClass {
    companion object {
        var count: Int = -1 // Behaves like STATIC variable

        fun typeOfCustomers(): String { // Behaves like STATIC method
            return "American"
        }
    }
}


MyClass.count

MyClass.typeOfCustomers()

-----------------------------------------------------

+Data class and Super class "Any" (May 22, 2019, 10:37 p.m.)

The purpose of Data class is to deal with Data, not the Objects!

---------------------------------------------------------------

var user1 = User("Mohsen", 10)
var user2 = User("Mohsen", 10)

if (user1 == user2) {
    // false: a regular class inherits equals() from the superclass "Any",
    // which compares references. Declare User as a "data class" to get
    // value-based equality.
}

class User(var name: String, var id: Int) {

}

---------------------------------------------------------------

data class User(var name: String, var id: Int) {

}
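
A quick sketch of what the data class buys you (value-based equality, plus a generated copy() and toString()):

val user1 = User("Mohsen", 10)
val user2 = User("Mohsen", 10)

println(user1 == user2) // true: the data class overrides equals() from "Any"
println(user1.copy(id = 11)) // copy() and toString() are generated too: User(name=Mohsen, id=11)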

---------------------------------------------------------------

+lazy initialization (May 22, 2019, 9:09 p.m.)

// If you don't use the following "pi" variable anywhere in your code, it is a waste of memory.
val pi: Float = 3.14f

You should use lazy initialization (a lazy lambda function):
val pi: Float by lazy {
    3.14f
}
The "pi" variable will only get initialized the first time you use it.

------------------------------------------------------------
- "Lazy initialization" was designed to prevent unnecessary initialization of objects.

- Your variables will not be initialized unless you use them in your code.

- It is initialized only once. Next time when you use it, you get the value from cache memory.

- It is thread-safe.
It is initialized in the thread where it is used for the first time.
Other threads can use the same value stored in the cache.

- The variable must be val; lazy delegation is read-only (use lateinit for a var).

- The variable can be nullable or non-nullable data types.

+lateinit keyword (May 22, 2019, 9:04 p.m.)

- lateinit is used only with the mutable data type [ var ]
- lateinit is used only with non-nullable data types
- lateinit values must be initialized before you use them

class Country {
    lateinit var name: String
}
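
A small sketch (the printName function is just an illustration): inside the class you can guard with ::name.isInitialized; accessing a lateinit property before assignment throws UninitializedPropertyAccessException:

class Country {
    lateinit var name: String

    fun printName() {
        // Accessing "name" before assignment would throw UninitializedPropertyAccessException
        if (::name.isInitialized) println(name) else println("name not set yet")
    }
}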

+Null Safe (May 22, 2019, 8:46 p.m.)

We have a lot of null safety operators which help us avoid the NullPointerException:
?. Safe Call Operator

?: Elvis

!! Not-null Assertion

?.let { .. } Safe Call with let

------------------------------------------------------------

val name: String = null // We can't do this.

val name: String? = null // Now it will accept null values

------------------------------------------------------------

1- Safe Call (?. )
- Returns the length if "name" is not null else returns NULL
- Use it if you don't mind getting NULL value

println("The length of name is ${name?.length}") // returns null because it has null value at the top.

------------------------------------------------------------

2- Safe Call with let ( ?.let )
- It executes the block ONLY IF name is NOT NULL

name?.let {
    println("The length of name is ${name.length}")
}

------------------------------------------------------------

3- Elvis-operator ( ?: )
- When we have nullable reference "name", we can say "if name is not null", use it, otherwise use some non-null value.

val len = if (name != null)
    name.length
else
    -1

OR (the above code can be simplified as follow):

val len = name?.length ?: -1

------------------------------------------------------------

4- Non-null assertion operator ( !! )
// Use it when you are sure the value is NOT Null
// Throws NullPointerException if the value is found to be NULL.

println("The length of name is ${name!!.length}")

------------------------------------------------------------

+Predicates: a condition returning TRUE or FALSE (May 22, 2019, 8:35 p.m.)

"all": Do all elements satisfy the predicate/condition?

"any": Do any element in the list satisfy the predicate?

"count": Total elements that satisfy the predicate

"find", "last": Returns the FIRST/LAST element that satisfy predicate

---------------------------------------------------------------

val myNumbers = listOf( 2, 3, 4, 6, 23, 90)

val check1: Boolean = myNumbers.all { it > 10 } // or all({ it > 10 }) // Returns false

---------------------------------------------------------------

val check2: Boolean = myNumbers.any( { num -> num > 10 } ) // or { it > 10 } // Returns true

---------------------------------------------------------------

val totalCount: Int = myNumbers.count { it > 10 }

---------------------------------------------------------------

// Returns the first number that matches the predicate
val num: Int? = myNumbers.find { it > 10 }

---------------------------------------------------------------

Store lambda function as a variable:

val myPredicate = { num: Int -> num > 10 }
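
It can then be passed anywhere a predicate is expected:

val bigCount = myNumbers.count(myPredicate) // 2 (only 23 and 90 are > 10)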

---------------------------------------------------------------

+Filter and Map using Lambdas (May 22, 2019, 8:21 p.m.)

val myNumbers: List<Int> = listOf(2, 3, 4, 5, 23, 90)

val mySmallNums = myNumbers.filter { it < 10 } // or { num -> num < 10 }

for (num in mySmallNums) {
    println(num) // Will print 2, 3, 4, 5
}

--------------------------------------------------------

val mySquareNums = myNumbers.map { it * it } // or { num -> num * num }

will return 4, 9, 16, 25, and so on...

--------------------------------------------------------

val mySmallSquareNums = myNumbers.filter { it < 10 }.map { it * it }

--------------------------------------------------------

var people: List<Person> = listOf<Person>(Person(23, "Mohsen"), Person(30, "Ali"))

var names = people.map { p -> p.name } // or { it.name }

var names = people.filter { person -> person.name.startsWith("M") }.map { it.name }

--------------------------------------------------------

+Collections - Set and Hash Set (May 22, 2019, 8:11 p.m.)

// "Set" contains unique elements
// "HashSet" also contains unique elements but sequence is not guaranteed in output


// The "9"s will get unify. It means there will be only ONE 9.
var mySet = setOf<Int>( 2, 9, 7, 1, 9, 14, 0, 9 ) // Immutable, Read Only

for (element in mySet) {
    println(element)
}

----------------------------------------------------------

var mySet = mutableSetOf<Int>( 2, 9, 7, 1, 9, 14, 0, 9 ) // Mutable Set, Read and Write
mySet.remove(14)
mySet.add(100)

----------------------------------------------------------

// HashSet, the sequence is not guaranteed in output.
var mySet = hashSetOf<Int>( 2, 9, 7, 1, 9, 14, 0, 9 ) // Mutable Set

----------------------------------------------------------

+Collections - Map and Hash Map (May 22, 2019, 4:41 p.m.)

// Immutable, Fixed Size, Read Only
var myMap = mapOf<Int, String>(2 to "Mohsen", 7 to "Mehdi")
// myMap.put(...) is not available: mapOf returns a read-only Map

for (key in myMap.keys) {
    println(myMap[key]) // myMap.get(key)
    println("Element at Key: $key = ${myMap.get(key)}") // ${myMap[key]}
}

---------------------------------------------------------

// Mutable, Read and Write both, No Fixed Size
var myMap = HashMap<Int, String>() // You can also use mutableMapOf and hashMapOf
myMap.put(4, "Mohsen")
myMap.put(7, "Mehdi")

myMap.replace(4, "Akbar")
OR
myMap.put(4, "Akbar")

---------------------------------------------------------

+Collections - List and ArrayList (May 22, 2019, 4:16 p.m.)

Immutable Collections: Read Only Operations
- Immutable List: listOf
- Immutable Map: mapOf
- Immutable Set: setOf

Mutable Collections: Read and Write Both
- Mutable List: ArrayList, arrayListOf, mutableListOf
- Mutable Map: HashMap, hashMapOf, mutableMapOf
- Mutable Set: mutableSetOf, hashSetOf

-----------------------------------------------------------

Mutable:

var list = mutableListOf<String>("Mohsen", "Alex", "Hadi", "Mehdi")
list.add("Ali")
list.remove("Alex")
list.add(3, "Akbar")
list[2] = "Asghar"

------------------------

An array with 5 elements, all values are zero.
var myArray = Array<Int>(5) { 0 } // Mutable. Fixed Size.

myArray[0] = 32
myArray[3] = 54

println(myArray[3])


for (element in myArray) {
    println(element)
}


for (index in 0..myArray.size - 1) { } // or, more idiomatically: for (index in myArray.indices) { }

-----------------------------------------------------------

Immutable:

// Fixed Size, Read Only, Immutable
var list = listOf<String>("Mohsen", "Alex", "Hadi", "Mehdi")

-----------------------------------------------------------

ArrayList is an implementation of the MutableList interface in Kotlin:

class ArrayList<E> : MutableList<E>, RandomAccess


MutableList should be chosen whenever possible, but ArrayList is a MutableList. So if you're already using ArrayList, there's really no reason to use MutableList instead, especially since you can't actually directly create an instance of it (MutableList is an interface, not a class).

In fact, if you look at the mutableListOf() Kotlin extension method:

public inline fun <T> mutableListOf(): MutableList<T> = ArrayList()

you can see that it just returns an ArrayList of the elements you supplied.

-----------------------------------------------------------

+WITH and APPLY Lambdas (May 22, 2019, 4:14 p.m.)

fun main() {
    var person = Person()

    with(person) { // "with" lets you drop the "person." prefix (person.name, person.age). It looks neater.
        name = "Mohsen"
        age = 33
    }

    person.apply { // "apply" returns the receiver, so you can also chain method calls on it.
        name = "Mohsen"
        age = 33
    }.someMethod()
}


class Person {
    var name: String = ""
    var age: Int = 0

    fun someMethod() {
        println("Some string")
    }
}

+tailrec - Tail recursive functions (May 18, 2019, 3 p.m.)

When a function is marked with the tailrec modifier, the compiler optimises out the recursion, leaving behind a fast and efficient loop-based version instead (see the sketch below).
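
A minimal sketch (factorial as an illustration): the accumulator parameter makes the recursive call the very last operation, which is what lets the compiler rewrite it as a loop:

// Tail-recursive factorial: the recursive call is the last operation
tailrec fun factorial(n: Long, accumulator: Long = 1): Long =
    if (n <= 1) accumulator else factorial(n - 1, n * accumulator)

fun main() {
    println(factorial(20)) // 2432902008176640000, with no risk of StackOverflowError
}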

+Infix Functions (May 18, 2019, 2:24 p.m.)

Infix functions can be a member function or an extension function.
They have a SINGLE parameter.
They carry the "infix" prefix.


Note: an infix function must be a member or an extension function, but not every extension function is infix.
An infix function can only have ONE parameter.

-----------------------------------------------------------

infix fun Int.greaterValue(number: Int): Int {
    if (this > number)
        return this
    else
        return number
}


Then you can use it like this:
val x: Int = 6
val y: Int = 10

val greaterVal = x.greaterValue(y)

OR

val greaterVal = x greaterValue y

+Extension Functions (May 18, 2019, 2:22 p.m.)

Adds new functions to classes:
- You can "add" functions to a class without modifying its declaration.
- The added functions behave like "static" functions (see the sketch below).
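
A minimal sketch (isEven is just an illustration): adding a function to Int without touching its class:

// Extension function on Int
fun Int.isEven(): Boolean = this % 2 == 0

fun main() {
    println(4.isEven()) // true
    println(7.isEven()) // false
}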

+Functions as Expressions - One line functions (May 18, 2019, 1:24 p.m.)

fun max(a: Int, b: Int): Int = if (a > b) a else b

-------------------------------------------------------------------

fun max(a: Int, b: Int): Int =
    if (a > b) {
        print("$a is greater")
        a
    } else {
        print("$b is greater")
        b
    }

+Functions and Methods (May 18, 2019, 1:13 p.m.)

fun findArea(length: Int, breadth: Int): Int {
    return length * breadth
}



fun findArea(length: Int, breadth: Int): Unit {
    print(length * breadth)
}


Unit is the same as void in Java.

+BREAK statement with LABELED FOR Loop (May 18, 2019, 1:09 p.m.)

myLoop@ for (i in 1..3) {
    for (j in 1..3) {
        println("$i $j")
        if (i == 2 && j == 2)
            break@myLoop
    }
}

It will BREAK when reaching "2 2":
1 1
1 2
1 3
2 1
2 2

+do-while (May 18, 2019, 1:07 p.m.)

var i: Int = 1

do {
    println(i)
    i++
} while (i <= 10)

+when (May 18, 2019, 1:01 p.m.)

when (x) {
    in 1..20 -> println("A message")
    !in 5..9 -> println("Another message")
    2 -> {

    }
    4 -> str = "A string value"
    else -> {

    }
}

+Ranges (May 18, 2019, 12:51 p.m.)

val r1 = 1..5 // 1, 2, 3, 4, 5

val r2 = 5 downTo 1 // 5, 4, 3, 2, 1

val r3 = 5 downTo 1 step 2 // 5, 3, 1

var r4 = 'a'..'z' // "a", "b", "c", .... "z"

var isPresent = 'c' in r4

var countDown = 10.downTo(1) // 10, 9, 8, .... 1

var moveUp = 1.rangeTo(10) // 1, 2, 3, ..... 10

+Class and Function Class (May 18, 2019, 12:38 p.m.)

class Person {
    var name: String = ""
}

----------------------------------------------------------

var personObj = Person()
personObj.name = "Mohsen"
print("My name is ${personObje.name}")

----------------------------------------------------------

class Student constructor(name: String) {
    init {
        println("The student name is $name")
    }
}


You can also drop the "constructor" keyword:

class Student(name: String) {
    init {
        println("The student name is $name")
    }

    // Secondary constructor
    constructor(name: String, id: Int): this(name) {
        // The body of the secondary constructor is called after the init block
    }

    // Note: "var" is NOT allowed on secondary-constructor parameters, so
    // constructor(my_name: String, var id: Int): this(my_name) does not compile.
    // Declare an "id" property on the class and assign it in the body instead:
    // this.id = id
}

----------------------------------------------------------

By default all classes are "public" and "final", which means you cannot inherit from them.

public final class Student {
    public final val name: String = ""
}

You can drop "public final" keywords.

----------------------------------------------------------

For inheritance you need to make a class "open".

open class Human { }

class Student: Human() { }

----------------------------------------------------------

Overriding:

open class Animal {
    open fun eat() {
        println("Animal Eating")
    }
}

class Dog: Animal() {
    // Note: a class can only have ONE override of eat(); the two versions
    // below are alternatives.
    override fun eat() {
        println("Dog is eating")
    }

    /* Alternative: also call the parent implementation.
    override fun eat() {
        super.eat() // Prefer super<Animal>.eat() if the class also implements interfaces declaring eat().
        print("Dog is eating")
    }
    */
}

----------------------------------------------------------

Visibility Modifiers:

public // This is the default
protected
internal
private


open class Person {
    private val a = 1
    protected val b = 2
    internal val c = 3
    val d = 10 // public by default
}


class Indian: Person() {
    // a is not visible
    // b, c, d are visible
}


----------------------------------------------------------

+Variables and Data Types (May 18, 2019, 12:34 p.m.)

var age = 33 // Int

var grade = 21.5 // Double (floating-point literals are Double by default)
var myName: String // Mutable String
myName = "Mohsen"
myName = "MohseNN"

val myFamilyName = "Hassani" // Immutable String

var gender: Char = 'M'

var percentage: Double = 90.78

var marks: Float = 97.4F

var isStudying: Boolean = true

+Static Members for class (May 17, 2019, 12:03 p.m.)

Most programming languages have a concept where classes can have static members: fields that are created only once per class and can be accessed without an instance of their containing class.

Kotlin doesn't have static members for classes, which means you can't create static methods or static variables in a Kotlin class.

Fortunately, Kotlin object can handle this. If you declare a companion object inside your class, you'll be able to call its members with the same syntax as calling static methods in Java/C#, using only the class name as a qualifier.


class MyClass {
    companion object {
        val info = "This is info"
        fun getMoreInfo(): String { return "This is more fun" }
    }
}

MyClass.info // This is info
MyClass.getMoreInfo() // This is more fun


Note that, even though the members of companion objects look like static members in other languages, at runtime those are still instance members of real objects, and can, for example, implement interfaces.
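
A minimal sketch of that last point (JsonFactory and Person are just illustrations): the companion object is a real object, so it can implement an interface while still being called like a static member:

interface JsonFactory<T> {
    fun fromJson(json: String): T
}

class Person(val name: String) {
    companion object : JsonFactory<Person> {
        override fun fromJson(json: String) = Person(json.trim()) // toy "parsing"
    }
}

fun main() {
    val p = Person.fromJson(" Mohsen ") // looks like a static call
    println(p.name) // Mohsen
}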

+for Loop / Iteration (May 10, 2019, 11:22 a.m.)

for (item in collection) {
    // body of loop
}

-------------------------------------------------------------

Iterate Through a Range:

fun main(args: Array<String>) {

    for (i in 1..5) {
        println(i)
    }
}

-------------------------------------------------------------

If the body of the loop contains only one statement (like above example), it's not necessary to use curly braces { }.

fun main(args: Array<String>) {
    for (i in 1..5) println(i)
}

-------------------------------------------------------------

for (i in 1..5) print(i)

for (i in 5 downTo 1) print(i)

for (i in 1..5 step 2) print(i)

for (i in 5 downTo 1 step 2) print(i)

-------------------------------------------------------------

Iterating Through an Array:

var language = arrayOf("Ruby", "Kotlin", "Python", "Java")
for (item in language)
    println(item)

-------------------------------------------------------------

Iterate through an array with an index:

var language = arrayOf("Ruby", "Kotlin", "Python", "Java")

for (item in language.indices) {
    // printing array elements having even index only
    if (item % 2 == 0)
        println(language[item])
}

-------------------------------------------------------------

Iterating Through a String:

var text= "Kotlin"
for (letter in text) {
println(letter)
}


-------------------------------------------------------------
-------------------------------------------------------------

+List (May 10, 2019, 10:41 a.m.)

List is immutable by default; the mutable version of List is called MutableList!


val list: List<String> = ArrayList()
In this case you will not get an add() method as list is immutable.

-----------------------------------------------------------------

val list: MutableList<String> = ArrayList()
Now you will see an add() method and you can add elements to list.

-----------------------------------------------------------------

MUTABLE collection:
val list = mutableListOf(1, 2, 3)
list += 4

-----------------------------------------------------------------

IMMUTABLE collection:
var list = listOf(1, 2, 3)
list += 4 // compiles only because "list" is a var: a NEW list is created and the reference is reassigned

-----------------------------------------------------------------

+Getters and setters (May 9, 2019, 4:08 a.m.)

If you are calling
var side: Int = square.a

it does not mean that you are accessing a directly. It is the same as:
int side = square.getA();

in Java, because Kotlin autogenerates default getters and setters.


In Kotlin, only if you have special setter or getter you should specify it. Otherwise, Kotlin autogenerates it for you.
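
A minimal sketch of such a special getter/setter (the Square class here is just an illustration):

class Square {
    var a: Int = 0
        set(value) {
            field = value // "field" is the backing field
            println("side updated to $value")
        }

    val area: Int
        get() = a * a // computed on every access
}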

+Null Operators ? !! (May 9, 2019, 3:36 a.m.)

What is the meaning of ? in savedInstanceState: Bundle? ?
It means that the savedInstanceState parameter can be of Bundle type or null. Kotlin is a null-safe language.


var a : String // you will get a compilation error, because a must be initialized and it cannot be null.


That means you have to write:
var a : String = "Init value"



Also, you will get a compilation error if you do:
a = null


To make a nullable, you have to write:
var a : String?


Let’s say that we have nullable nameTextView. The following code will give us NPE if it is null:
nameTextView.setEnabled(true)


Kotlin will not allow us to even do such a thing. It will force us to use ? or !! operator.
If we use ? operator:
nameTextView?.setEnabled(true)

the line will proceed only if nameTextView is not null. On the other hand, if we use the !! operator:
nameTextView!!.setEnabled(true)

it will give us an NPE if nameTextView is null. It is just for adventurers.



lateinit modifier allows us to have non-null variables waiting for initialization.

Kotlin - Android
+Components of a RecyclerView (June 22, 2019, 2:56 p.m.)

1- LayoutManagers:

A RecyclerView needs to have a layout manager and an adapter to be instantiated. A layout manager positions item views inside a RecyclerView and determines when to reuse item views that are no longer visible to the user.

RecyclerView provides these built-in layout managers:
- LinearLayoutManager shows items in a vertical or horizontal scrolling list.
- GridLayoutManager shows items in a grid.
- StaggeredGridLayoutManager shows items in a staggered grid.

To create a custom layout manager, extend the RecyclerView.LayoutManager class.

------------------------------------------------------------------

2- RecyclerView.Adapter

RecyclerView includes a new kind of adapter. It’s a similar approach to the ones you already used, but with some peculiarities, such as a required ViewHolder. You will have to override two main methods: one to inflate the view and its view holder, and another one to bind data to the view. The good thing about this is that the first method is called only when we really need to create a new view. No need to check if it’s being recycled.
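
A minimal sketch of such an adapter, assuming a hypothetical item_row layout containing a TextView with id "title" (the R references are placeholders):

import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

class StudentAdapter(private val items: List<String>) :
    RecyclerView.Adapter<StudentAdapter.ViewHolder>() {

    class ViewHolder(view: View) : RecyclerView.ViewHolder(view) {
        val title: TextView = view.findViewById(R.id.title) // R.id.title is a placeholder
    }

    // Called only when no recycled view is available
    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ViewHolder =
        ViewHolder(
            LayoutInflater.from(parent.context)
                .inflate(R.layout.item_row, parent, false) // R.layout.item_row is a placeholder
        )

    // Binds data to a (possibly recycled) view
    override fun onBindViewHolder(holder: ViewHolder, position: Int) {
        holder.title.text = items[position]
    }

    override fun getItemCount() = items.size
}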

------------------------------------------------------------------

3- ItemAnimator

RecyclerView.ItemAnimator will animate ViewGroup modifications such as add/delete/select that are notified to the adapter. DefaultItemAnimator can be used for basic default animations and works quite well. See the section of this guide for more information.

------------------------------------------------------------------

+RecyclerView Compared to ListView (June 22, 2019, 2:48 p.m.)

RecyclerView differs from its predecessor ListView primarily:

- Required ViewHolder in Adapters - ListView adapters do not require the use of the ViewHolder pattern to improve performance. In contrast, implementing an adapter for RecyclerView requires the use of the ViewHolder pattern, for which it uses RecyclerView.ViewHolder.

- Customizable Item Layouts - ListView can only layout items in a vertical linear arrangement and this cannot be customized. In contrast, the RecyclerView has a RecyclerView.LayoutManager that allows any item layouts including horizontal lists or staggered grids.

- Easy Item Animations - ListView contains no special provisions through which one can animate the addition or deletion of items. In contrast, the RecyclerView has the RecyclerView.ItemAnimator class for handling item animations.

- Manual Data Source - ListView had adapters for different sources such as ArrayAdapter and CursorAdapter for arrays and database results respectively. In contrast, the RecyclerView.Adapter requires a custom implementation to supply the data to the adapter.

- Manual Item Decoration - ListView has the android:divider property for easy dividers between items in the list. In contrast, RecyclerView requires the use of a RecyclerView.ItemDecoration object to set up divider decorations manually.

- Manual Click Detection - ListView has an AdapterView.OnItemClickListener interface for binding to the click events for individual items in the list. In contrast, RecyclerView only has support for RecyclerView.OnItemTouchListener, which manages individual touch events but has no built-in click handling.

+Difference between gravity and layout_gravity (June 12, 2019, 3:47 a.m.)

gravity:

- sets the gravity of the contents (i.e. its subviews) of the View it's used on.

- arranges the content inside the view.

--------------------------------------------------------------

layout_gravity:

- sets the gravity of the View or Layout relative to its parent.

- arranges the view's position outside of itself.

--------------------------------------------------------------

HTML/CSS Equivalents:

Android                    CSS
android:layout_gravity     float
android:gravity            text-align

+Retrofit (May 25, 2019, 10:53 a.m.)

1- Create an Interface:
that will contain various functions which will map to the endpoint URLs of your web service, such as:
getStudents()
deleteStudent()


2- Create a service that calls the functions present within the interface.
createService( <T> Service) -> studentsService


3- Last step: within your activity, initialize the step-2 service and then call the functions of the interface from step-1 (see the sketch below).
studentsService.getStudents()
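
A minimal Kotlin sketch of those three steps, assuming the retrofit2 and converter-gson artifacts; the base URL and the Student model are placeholders:

import retrofit2.Call
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory
import retrofit2.http.GET

data class Student(val id: Int, val name: String) // placeholder model

// 1- The interface mapping the endpoint URLs
interface StudentsService {
    @GET("students")
    fun getStudents(): Call<List<Student>>
}

// 2- Create the service
val retrofit = Retrofit.Builder()
    .baseUrl("https://example.com/api/") // placeholder base URL
    .addConverterFactory(GsonConverterFactory.create())
    .build()

val studentsService = retrofit.create(StudentsService::class.java)

// 3- Call the interface functions (enqueue runs the request asynchronously)
// studentsService.getStudents().enqueue(...)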

+Shared Preferences (May 14, 2019, 12:54 a.m.)

It allows activities and applications to keep preferences, in the form of key-value pairs similar to a Map that will persist even when the user closes the application.

Android stores Shared Preferences settings as XML file in shared_prefs folder under DATA/data/{application package} directory. The DATA folder can be obtained by calling Environment.getDataDirectory().

------------------------------------------------------------

SharedPreferences is application specific, i.e. the data is lost on performing one of the following options:
- on uninstalling the application
- on clearing the application data (through Settings)

------------------------------------------------------------

As the name suggests, the primary purpose is to store user-specified configuration details, such as user specific settings, keeping the user logged into the application.

------------------------------------------------------------

To get access to the preferences, we have three APIs to choose from:
- getPreferences() : used from within your Activity, to access activity-specific preferences

- getSharedPreferences() : used from within your Activity (or other application Context), to access application-level preferences

- getDefaultSharedPreferences() : used on the PreferenceManager, to get the shared preferences that work in concert with Android’s overall preference framework

------------------------------------------------------------

// Storing Data:
sharedPref = getSharedPreferences(getString(R.string.preference_file_key), MODE_PRIVATE)
with(sharedPref.edit()) {
    putBoolean("intro_screen_displayed", true)
    apply()
}



// Retrieving Data
var sharedPref = getSharedPreferences(getString(R.string.preference_file_key), MODE_PRIVATE)
if (sharedPref.getBoolean("intro_screen_displayed", false))
    startActivity(mainActivity)

------------------------------------------------------------

editor.putBoolean("key_name", true); // Storing boolean - true/false
editor.putString("key_name", "string value"); // Storing string
editor.putInt("key_name", 1); // Storing integer
editor.putFloat("key_name", 1.0f); // Storing float
editor.putLong("key_name", 1000L); // Storing long

pref.getString("key_name", null); // getting String
pref.getInt("key_name", -1); // getting Integer
pref.getFloat("key_name", -1f); // getting Float (the default must be a float, not null)
pref.getLong("key_name", -1L); // getting Long (the default must be a long, not null)
pref.getBoolean("key_name", false); // getting boolean (the default must be a boolean, not null)

------------------------------------------------------------

// Clearing or Deleting Data:
remove("key_name") is used to delete that particular value.

clear() is used to remove all data
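
For example, in Kotlin (reusing the sharedPref from the snippets above):

with(sharedPref.edit()) {
    remove("intro_screen_displayed") // delete one key
    // clear() // or wipe everything
    apply()
}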

------------------------------------------------------------

+Repeat background image (May 11, 2019, 10:11 p.m.)

1- Copy the background image in drawable


2- Create a file in drawable "bg_pattern.xml" with this content:
<bitmap xmlns:android="http://schemas.android.com/apk/res/android"
android:src="@drawable/bg"
android:tileMode="repeat" />


3- Add the following attribute to the XML file for the specific view:
android:background="@drawable/bg_pattern"

+Get asset image by its string name (May 11, 2019, 4:17 p.m.)

import android.graphics.BitmapFactory
import android.graphics.Bitmap


var icon: Bitmap? = BitmapFactory.decodeStream(assets.open("intro_screen/img1.jpg"))
imageView.setImageBitmap(icon) // "imageView" here is the target ImageView

+dimensions (May 10, 2019, 9:51 p.m.)

xxxhdpi: 1280x1920 px
xxhdpi: 960x1600 px
xhdpi: 640x960 px
hdpi: 480x800 px
mdpi: 320x480 px
ldpi: 240x320 px

+mipmap directories (May 10, 2019, 9:40 p.m.)

Like all other bitmap assets, you need to provide density-specific versions of your app icon. However, some app launchers display your app icon as much as 25% larger than what's called for by the device's density bucket.

For example, if a device's density bucket is xxhdpi and the largest app icon you provide is in drawable-xxhdpi, the launcher app scales up this icon, and that makes it appear less crisp. So you should provide an even higher density launcher icon in the mipmap-xxxhdpi directory. Now the launcher can use the xxxhdpi asset instead.

Because your app icon might be scaled up like this, you should put all your app icons in mipmap directories instead of drawable directories. Unlike the drawable directory, all mipmap directories are retained in the APK even if you build density-specific APKs. This allows launcher apps to pick the best resolution icon to display on the home screen.

+Configuration qualifiers for different pixel densities (May 10, 2019, 9:31 p.m.)

ldpi Resources for low-density (ldpi) screens (~120dpi).
mdpi Resources for medium-density (mdpi) screens (~160dpi). (This is the baseline density.)
hdpi Resources for high-density (hdpi) screens (~240dpi).
xhdpi Resources for extra-high-density (xhdpi) screens (~320dpi).
xxhdpi Resources for extra-extra-high-density (xxhdpi) screens (~480dpi).
xxxhdpi Resources for extra-extra-extra-high-density (xxxhdpi) uses (~640dpi).
nodpi Resources for all densities. These are density-independent resources. The system does not scale resources tagged with this qualifier, regardless of the current screen's density.
tvdpi Resources for screens somewhere between mdpi and hdpi; approximately 213dpi. This is not considered a "primary" density group. It is mostly intended for televisions and most apps shouldn't need it—providing mdpi and hdpi resources is sufficient for most apps and the system will scale them as appropriate. If you find it necessary to provide tvdpi resources, you should size them at a factor of 1.33*mdpi. For example, a 100px x 100px image for mdpi screens should be 133px x 133px for tvdpi.

+ConstraintLayout (March 24, 2019, 2:45 p.m.)

Constraints help us describe the relations between views.

-----------------------------------------------------------------

A constraint is a connection or an alignment to the element the constraint is tied to. You define various constraints for every child view relative to other views present. This gives you the ability to construct complex layouts with a flat view hierarchy.

A constraint is simply a relationship between two components within the layout that controls how the view will be positioned.

-----------------------------------------------------------------

The ConstraintLayout system has three parts: constraints, equations, and solver.

Constraints are relationships between your views and are determined when you set up your UI. Once you create these relationships, the system will translate them into a linear system of equations.

The equations go in the solver and it returns the positions, and view sizes to be used in the layout.

-----------------------------------------------------------------

The ConstraintLayout becomes very necessary most especially when building complex layouts. Android actually has other layouts, which have their own unique features. Some of which could be used to build complex layouts also. However, they have their own bottlenecks, hence the need to introduce a new layout.


These older layouts have rules that tend to be too rigid. As a result of this, the tendency to nest layouts become higher. For instance, the LinearLayout only permits placing views linearly, either horizontally or vertically. The FrameLayout places views in a stacked manner, the topmost view hides the rest. The RelativeLayout places the views relative to each other.

-----------------------------------------------------------------

When creating constraints, there are a few rules to follow:
Every view must have at least two constraints: one horizontal and one vertical. If a constraint for any axis is not added, your view jumps to the zero point of that axis.

You can create constraints only between a constraint handle and an anchor point that share the same plane. So a vertical plane (the left and right sides) of a view can be constrained only to another vertical plane, and baselines can constrain only to other baselines.

Each constraint handle can be used for just one constraint, but you can create multiple constraints (from different views) to the same anchor point.
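
For example, a minimal sketch of a Button with one horizontal and one vertical constraint (using the AndroidX artifact; the widget id is a placeholder):

<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="match_parent">

<Button
android:id="@+id/my_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>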

-----------------------------------------------------------------

+Custom font (April 26, 2019, 10:46 p.m.)

https://medium.com/@studymongolian/using-a-custom-font-in-your-android-app-cc4344b977a5

+Creating actions in the action bar / toolbar (April 26, 2019, 12:40 a.m.)

https://developer.android.com/training/appbar/actions
--------------------------------------------------------------------

Buttons in the toolbar are typically called actions.

Space in the app bar is limited. If an app declares more actions than can fit in the app bar, the app bar sends the excess actions to an overflow menu.

The app can also specify that an action should always be shown in the overflow menu, instead of being displayed on the app bar.

--------------------------------------------------------------------

Add Action Buttons:

All action buttons and other items available in the action overflow are defined in an XML menu resource.


To add actions to the action bar, create a new XML file in your project's res/menu/ directory as follows:
1- In Android Studio, in project view, select "Project", right click on "res" folder and choose the menu "New" -> "Android Resource File".


2- In the window for "file name" set for example "main_toolbar" and for "Resource type" choose "menu", hit OK button.


3- Add an <item> element for each item you want to include in the action bar, as shown in this code example of a menu XML file:

<menu xmlns:android="http://schemas.android.com/apk/res/android" >

<item
android:id="@+id/action_favorite"
android:icon="@drawable/ic_favorite_black_48dp"
android:title="@string/action_favorite"
app:showAsAction="ifRoom"/>

<!-- Settings, should always be in the overflow -->
<item android:id="@+id/action_settings"
android:title="@string/action_settings"
app:showAsAction="never"/>

</menu>


4- Add the following code to MainActivity.kt
override fun onCreateOptionsMenu(menu: Menu): Boolean {
    menuInflater.inflate(R.menu.main_toolbar, menu)
    return true
}

// (Shown only to indicate placement: the override above goes in the activity, alongside onCreate.)
override fun onCreate(savedInstanceState: Bundle?) { }

+Set up the app bar (Toolbar) (April 26, 2019, 12:20 a.m.)

https://developer.android.com/training/appbar/setting-up#kotlin
-------------------------------------------------------------------------------

1- Replace android:theme="@style/AppTheme" with android:theme="@style/Theme.AppCompat.Light.NoActionBar" in AndroidManifest.xml

2- Add a Toolbar to the activity's layout (activity_main.xml)
<android.support.v7.widget.Toolbar
android:id="@+id/my_toolbar"
android:layout_width="match_parent"
android:layout_height="?attr/actionBarSize"
android:background="?attr/colorPrimary"
android:elevation="4dp"
android:theme="@style/ThemeOverlay.AppCompat.ActionBar"
app:popupTheme="@style/ThemeOverlay.AppCompat.Light"/>

It might display an error like "This view is not constrained vertically...". To fix it:
Go to the Design view and use the magic wand icon in the toolbar above the design preview. This automatically adds some lines in the text field, and the red line disappears.

You can also set the background color to transparent:
android:background="@android:color/transparent"

3- Add the 3rd line to MainActivity.kt
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
setSupportActionBar(findViewById(R.id.my_toolbar))

+Views (April 25, 2019, 1:52 p.m.)

A view is basically any of the widgets that make up a typical utility app.

Examples include images (ImageViews), text (TextView), editable text boxes (EditText), web pages (WebViews), and buttons (err, Button).

+XML - Introduction (April 25, 2019, 1:44 p.m.)

XML describes the views in your activities, and Kotlin tells them how to behave.


Sometimes XML will be used to describe types of data other than views in your apps; acting as a kind of index that your code can refer to. This is how most apps will define their color palettes for instance, meaning that there’s just one file you need to edit if you want to change the look of your entire app.

Kubernetes
+Installation (July 25, 2020, 12:17 p.m.)

1- Disable SWAP memory:
swapoff -a


2- Install Docker using my notes.


3- Install Kubernetes:
apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt update

apt install -y kubelet kubeadm kubectl

apt-mark hold kubelet kubeadm kubectl


4- Initialize Kubernetes on Master Node:
kubeadm init --pod-network-cidr=10.244.0.0/16


5- Create a Directory for the Kubernetes Cluster:
Make kubectl work for your non-root user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


6- Pod Network Add-On (Flannel):
Install a pod network add-on so that your pods can communicate effectively.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


7-

+kubeadm, kubelet and kubectl (July 25, 2020, 12:28 p.m.)

- kubeadm: The command to bootstrap the cluster.

- kubelet: The component that runs on all of the machines in your cluster and does things like starting pods and containers.

- kubectl: The command line util to talk to your cluster.

----------------------------------------------------------------------

kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.

----------------------------------------------------------------------

+Stateful and Stateless Application / StatefulSet (July 25, 2020, 10:49 a.m.)

What is a Stateful Application?

Examples of stateful applications are databases (MySQL, Elasticsearch, MongoDB, etc.) or any application that stores data to keep track of its state. In other words, these are applications that track state by saving that information in some storage.

Stateless applications, on the other hand, do not keep records of previous interactions; each request is handled as a completely new, isolated interaction, based entirely on the information that comes with it.

Stateless applications sometimes connect to stateful applications to forward those requests.

------------------------------------------------------------------------

What is a StatefulSet?

It's a Kubernetes component that is used specifically for stateful applications.

------------------------------------------------------------------------

Stateless applications are deployed using the Deployment component. Deployment is an abstraction of Pods and allows you to replicate that application, meaning run two, five, ten identical Pods of the same stateless application in the cluster.

------------------------------------------------------------------------

While stateless applications are deployed using Deployment, stateful applications in the Kubernetes are deployed using StatefulSet component.
Just like Deployment, StatefulSet makes it possible to replicate the Stateful app, Pods, or to run multiple replicas of it.

------------------------------------------------------------------------

They both manage Pods that are based on an identical container specification, and you can also configure storage with both of them in the same way. So if both manage the replication of Pods and the configuration of data persistence in the same way, what is the difference between those two components? Why do we use different ones for each type of application? The differences are listed below.

------------------------------------------------------------------------

The differences between Deployment and StatefulSet:

- Replicating stateful applications is more difficult and has a couple of requirements that stateless applications do not have.

Example:
Let's say we have a MySQL database Pod that handles requests from a Java application, which is deployed using a Deployment component, and let's say we scaled the Java application to 3 Pods so they can handle more client requests. In parallel, we want to scale the MySQL app so it can handle more Java requests as well. Scaling the Java application here is pretty straightforward: its replica Pods are identical and interchangeable, so we can scale it using the Deployment pretty easily. The Deployment creates the Pods in any random order, they get random hashes at the end of the Pod name, they get one Service that load-balances requests to any of the replica Pods, and when you delete them, they get deleted in random order or at the same time. When you scale down from 3 to 2 replicas, for example, one random replica Pod gets chosen to be deleted. So, no complications there!

On the other hand, MySQL Pod replicas cannot be created and deleted at the same time or in any order, and they can't be randomly addressed. The reason is that the replica Pods are not identical: each has its own additional identity on top of the common blueprint of the Pod they get created from. Giving each Pod its own required, individual identity is what StatefulSet does differently from Deployment. It maintains a sticky identity for each of its Pods; as said earlier, these Pods are created from the same specification, but they are not interchangeable. Each has a persistent identifier that it maintains across any re-scheduling, meaning that when a Pod dies and gets replaced by a new Pod, it keeps that identity.

------------------------------------------------------------------------

+Architecture (July 21, 2020, 12:46 p.m.)

A basic setup of one node with two application pods running on it:

One of the main components of Kubernetes architecture is its worker servers or nodes. Each node will have multiple application pods with containers running on it. Kubernetes manages them using three processes that must be installed on every node and are used to schedule and manage those pods. Nodes are the cluster servers that do the work; that's why they are also called worker nodes. The first process that needs to run on every node is the container runtime, e.g. Docker: because application pods have containers running inside, a container runtime needs to be installed on every node. The process that actually schedules those pods and the containers underneath is Kubelet, which is a process of Kubernetes itself and has an interface with both the container runtime and the machine (the node) itself. At the end of the day, Kubelet is responsible for taking a pod configuration, running/starting the pod with a container inside, and assigning resources from the node to the container, like CPU, RAM, and storage.

So usually a Kubernetes cluster is made of multiple nodes, which also must have the container runtime and Kubelet services installed. You can have hundreds of those worker nodes which run other pods and containers and replicas of the existing pods. The way the communication between them works is via Services, which are a sort of load balancer that catches a request directed at a pod or application (like a database, for example) and forwards it to the respective pod.

The third process, responsible for forwarding requests from Services to pods, is Kube Proxy, which also must be installed on every node. Kube Proxy has intelligent forwarding logic inside that makes sure the communication works in a performant way with low overhead. For example, if an application is making a request to the database, instead of just randomly forwarding the request to any replica, it will forward it to the replica that is running on the same node as the pod that initiated the request. This avoids the network overhead of sending the request to another machine.

So, to summarize, two Kubernetes processes, Kubelet and Kube Proxy must be installed on every Kubernetes worker node along with an independent container runtime, in order for Kubernetes cluster to function properly.

+Namespace - Create component in a Namespace (July 21, 2020, 12:35 p.m.)

kubectl apply -f mysql-configmap.yml --namespace=my-namespace

-----------------------------------------------------------------------------

another way is inside the configuration file itself:

metadata:
  namespace: my-namespace

-----------------------------------------------------------------------------

kubectl get configmap -n my-namespace

-----------------------------------------------------------------------------

+Namespace - Introduction (July 21, 2020, 10:46 a.m.)

Usages of namespaces:

1- To group resources into namespaces:
For example, you can have a database namespace where you deploy your database and all its required resources.
You can have a monitoring namespace where you deploy the parameters and all the stuff it needs.
You can also have an Elastic Stack namespace where all the Elasticsearch, Kibana, etc. resources go together.
You can have Nginx-Ingress resources.


2- When you have multiple teams:
Imagine the scenario where you have two teams using the same cluster. One team deploys an application called "my-app deployment", with a certain configuration. Now, if another team had a deployment that accidentally had the same name "my-app deployment" but a different configuration, they would override the first team's deployment. To avoid such conflicts, you can use namespaces, so that each team can work in their own namespace without disrupting the other.


3-1 Resource sharing: Staging and Development:
Let's say you have one cluster and you want to host both the Staging and Development environments in it. This is useful when, for example, you're using something like the Nginx-Ingress Controller or the Elastic Stack for logging: you can deploy it once in the cluster and use it for both environments. That way, you don't have to deploy these common resources twice in two different clusters, and both Staging and Development can use them.


3-2 Resource Sharing: Blue/Green Deployment:
It means that in the same cluster you want to have two different versions of production: one that is active and in production now, and another that is going to be the next production version. The versions of the applications in those Blue and Green production namespaces will be different; however, just as we saw with Staging and Development, these namespaces might need to use the same resources, like the Nginx-Ingress Controller or the Elastic Stack. This way, they can both use these common shared resources without having to set up a separate cluster.


4- Access and Resource Limits of Namespaces:
Again we have a scenario where two teams work in the same cluster and each of them has its own namespace. What you can do in this scenario is give each team access only to its own namespace, so they are only able to create/update/delete resources there, but can't do anything in the other namespaces. This way you restrict, or minimize, the risk of one team accidentally interfering with another team's work, so each one has its own secured, isolated environment. An additional thing that you can do on the namespace level is to limit the resources (CPU, RAM, etc.) that each namespace consumes.
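A sketch of that last point, assuming a hypothetical "team-a" namespace; ResourceQuota is the standard Kubernetes object for per-namespace limits:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF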

-------------------------------------------------------------------------

In a Kubernetes cluster, you can organize resources in namespaces, so you can have multiple namespaces in a cluster. You can think of a namespace as a virtual cluster inside the Kubernetes cluster; when you create a cluster, Kubernetes gives you some namespaces out of the box by default.

$ kubectl get namespace
This command lists the namespaces that Kubernetes offers out of the box.

- The "kubernetes-dashboard" namespace is shipped automatically in minikube. It's specific to the minikube installation. You will not have this in the standard cluster.


- The "kube-system" namespace is not meant for your use. So basically you shouldn't create or modify anything in the kube-system namespace. The components that are deployed in the namespace are:
-- System processes
-- Master and Kubectl processes


- The "kube-public" namespace contains publicly accessible data. It has a config map that contains cluster information that is accessible even without authentication.


- The "kube-node-lease" namespace holds information about the heartbeats of nodes. Each node basically gets its own object that contains the information about that node's availability.


- The "default" namespace is the one that you're gonna be using to create the resources at the beginning if you haven't created a new namespace.

-------------------------------------------------------------------------

You can create new namespaces:

kubectl create namespace my-namespace

kubectl get namespace
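To avoid passing --namespace on every command, you can also change the active namespace of the current kubectl context:

kubectl config set-context --current --namespace=my-namespace

# verify which namespace is now active:
kubectl config view --minify | grep namespace: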

-------------------------------------------------------------------------

+Helm (July 21, 2020, 9:51 a.m.)

Helm has a couple of features that are useful:

- Package Manager for Kubernetes (To package YAML files and distribute them in public and private repositories)

- Templating Engine

- Deploying the same applications across different environments

- Release management
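A minimal sketch of those features in practice, using the public Bitnami chart repository as an example (the release name is arbitrary):

# Package manager: add a chart repository and install a packaged application
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-release bitnami/nginx

# Release management: upgrade the release, then roll back to revision 1
helm upgrade my-release bitnami/nginx --set replicaCount=2
helm rollback my-release 1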

+Basic Concepts (July 18, 2020, 4:27 p.m.)

Pod:

A pod is the smallest unit that you as a Kubernetes user will configure and interact with.

A pod is basically a wrapper of a container.

On each worker node, you're gonna have multiple pods and inside of a pod, you can have multiple containers.

Usually, per application, you would have one pod, so the only time you would need more than one container inside of a pod is when you have a main application that needs some helper containers. So, usually, you would have one pod per application.

A database for example would be one pod, a message broker will be another pod, a server again will be another pod, and your nodeJS application or Java application will have its own pod.

Each Pod is its own self-contained server with its own IP address, and the way Pods communicate with each other is by using those internal IP addresses.

We don't configure or create containers inside of the Kubernetes cluster; we only work with Pods, which are an abstraction layer over containers. A Pod is a component of Kubernetes that manages the containers running inside itself without our intervention. For example, if a container stops or dies inside of a Pod, it will be automatically restarted.
However, Pods are ephemeral components, which means Pods can die very frequently, and when a Pod dies, a new one gets created. Here is where the notion of Service comes into play. Whenever a Pod gets restarted or recreated, the new Pod gets a new IP address. So if, for example, your application talks to a database Pod using its IP address and that Pod restarts, it gets a new IP address, and it would obviously be very inconvenient to adjust that IP address all the time. Because of that, another component of Kubernetes called Service is used, which is basically an alternative or substitute for those IP addresses. Instead of relying on the dynamic IP addresses, a Service sits in front of each Pod, and the Services talk to each other. Now, if a Pod behind a Service dies and gets recreated, the Service stays in place, because their life-cycles are not tied to each other.

A Service has two main functionalities:
1- A permanent IP address that Pods can use to communicate with each other
2- Load balancing
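As an illustration (the deployment name "my-app" is hypothetical), a Service with a stable IP that load-balances over a deployment's Pods can be created with:

kubectl expose deployment my-app --port=80 --target-port=8080
kubectl get service my-app    # shows the Service's stable ClusterIP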

+Introduction (July 18, 2020, 2:59 p.m.)

Kubernetes is an open-source platform for managing container technologies such as Docker.

Docker lets you create containers from a pre-configured image and application. Kubernetes provides the next step, allowing you to balance loads between containers and run multiple containers across multiple systems.

The simplest description of a Kubernetes cluster would be a set of managed nodes that run applications in containers.

-----------------------------------------------------------------------------

Kubernetes is an open-source container orchestration tool that was originally developed by Google.

At its foundation, it manages containers: Docker containers or containers from other technologies.

Kubernetes helps you manage containerized applications that are made up of hundreds or thousands of containers and helps you manage them in different environments, like physical machines, virtual machines or cloud environments, or even hybrid development environments.

-----------------------------------------------------------------------------

What problems does Kubernetes solve?
What are the tasks of an orchestration tool?

- The trend from monolith to Microservices.
- Increased usage of containers
- Demand for a proper way of managing those hundreds of containers.

-----------------------------------------------------------------------------

What features do orchestration tools offer?

- High Availability or no downtime
- Scalability or high performance
- Disaster recovery - backup and restore

-----------------------------------------------------------------------------

What does the basic Kubernetes architecture look like?

The Kubernetes cluster is made up of at least one Master node and then connected to it you have a couple of worker nodes.
Each node has a Kubelet process running on it.

Kubelet is a Kubernetes process that makes it possible for the nodes in the cluster to talk to/communicate with each other and execute tasks on those nodes, like running application processes.

Each worker node has Docker containers of different applications deployed on it. So depending on how the workload is distributed you would have a different number of Docker containers running on worker nodes.

-----------------------------------------------------------------------------

What is running on the Master node?
The Master node runs several Kubernetes processes that are absolutely necessary to run and manage the cluster properly.

- API Server: One of the processes is the API Server, which also runs as a container. The API Server is the entry point to the Kubernetes cluster. This is the process that the different Kubernetes clients talk to: the UI, the API, the CLI.

- Controller Manager:
Keeps an overview of what's happening in the cluster, whether something needs to be repaired, or maybe if a container died and it needs to be restarted, etc.

- Scheduler:
The Scheduler is basically responsible for scheduling containers on different nodes based on the workload and the available server resources on each node. It's an intelligent process that decides which worker node the next container should be scheduled on, based on the available resources on those worker nodes and the load the container needs.

- ETCD key-value storage
It holds the current state of the Kubernetes cluster at any time. It has all the configuration data inside, and all the status data of each node and each container inside of that node. The backup and restore process is actually made from these etcd snapshots.

- Virtual Network
Enables worker nodes and master nodes to talk to each other. It turns all the nodes inside the cluster into one powerful machine that has the sum of all the resources of the individual nodes.

-----------------------------------------------------------------------------

Worker nodes actually carry the most load because they run the applications; they're usually much bigger and have more resources, because they will be running hundreds of containers.

The master node will be running just a handful of master processes, so it doesn't need that many resources. However, as you can imagine, a master node is much more important than the individual worker nodes, because if you lose master node access, for example, you will not be able to access the cluster anymore. That means you absolutely have to have a backup of your master at any time, so in production environments you would usually have at least two masters inside your Kubernetes cluster.

But in most cases, of course, you're going to have multiple masters, where if one master node goes down, the cluster continues to function smoothly because other masters are available.

-----------------------------------------------------------------------------

Linux
+Run commands without sudo (Sept. 21, 2020, 3:20 p.m.)

sudo gpasswd -a $USER docker
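Note that the group change only takes effect in a new login session; to pick it up immediately in the current shell, start a subshell with the new group:

newgrp docker
docker ps    # should now work without sudo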

+apt vs apt-get (July 25, 2020, 11:07 a.m.)

apt-get may be considered lower-level and "back-end", supporting other APT-based tools. apt is designed for end-users (humans) and its output may change between versions.

--------------------------------------------------------------------------

Install package:

apt-get install
apt install

--------------------------------------------------------------------------

Remove package:

apt-get remove
apt remove

--------------------------------------------------------------------------

Upgrade all packages:

apt-get upgrade
apt upgrade

--------------------------------------------------------------------------


Update all packages (auto handling of dependencies):

apt-get dist-upgrade
apt full-upgrade

--------------------------------------------------------------------------

Search packages:

apt-cache search
apt search

--------------------------------------------------------------------------

Show package information:

apt-cache show
apt show

--------------------------------------------------------------------------

Remove unwanted dependencies:

apt-get autoremove
apt autoremove

--------------------------------------------------------------------------

Removes package with associated configuration:

apt-get purge
apt purge

--------------------------------------------------------------------------

Two new commands introduced with apt:

apt list:
When apt list command is used with --installed or --upgradeable, it lists the packages that are installed, available to install, or those that need to be upgraded.

apt edit-sources:
When this command is used, it opens the "sources.list" file in an editor for editing.
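For example (nginx is just a sample package name to filter for; apt prints a CLI-stability warning on stderr, hence the redirect):

apt list --installed
apt list --upgradeable
apt list --installed 2>/dev/null | grep nginx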

--------------------------------------------------------------------------

DIFFERENCES TO APT-GET(8)
The apt command is meant to be pleasant for end-users and does not need to be backward compatible like apt-get(8). Therefore some options are different:
· The option DPkg::Progress-Fancy is enabled.
· The option APT::Color is enabled.
· A new list command is available similar to dpkg --list.
· The option upgrade has --with-new-pkgs enabled by default.

--------------------------------------------------------------------------

+HTTP Proxy To Socks (July 15, 2020, 11:44 a.m.)

1- Install NodeJS and NPM using my notes.


2- Install http-proxy-to-socks
npm install -g http-proxy-to-socks


3- Usage:
hpts -s 127.0.0.1:1080 -p 8080
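Here -s points at the existing SOCKS5 proxy and -p is the port the new HTTP proxy listens on. To test it through curl (any URL works):

curl -x http://127.0.0.1:8080 https://www.mohsenhassani.com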

+Curl (June 29, 2020, 8:02 p.m.)

HTTP GET request

curl https://www.mohsenhassani.com

----------------------------------------------------------------

Adding an additional HTTP request header:

curl -H "X-Header: value" https://www.mohsenhassani.com

----------------------------------------------------------------

Storing HTTP headers

With the -D option, you have the ability to store the HTTP headers that a site sends back. This is useful, for instance, if you want to read the cookies from the headers by using a second curl command with the -b option. The - after -D tells curl to write the headers to stdout.

curl -D - https://www.mohsenhassani.com/

----------------------------------------------------------------

Downloading a file (Saving the result of a curl command):

curl -O http://yourdomain.com/yourfile.tar.gz # Save as yourfile.tar.gz
curl -o newfile.tar.gz http://yourdomain.com/yourfile.tar.gz # Save as newfile.tar.gz

----------------------------------------------------------------

Resume an interrupted Download:

curl -C - -O http://yourdomain.com/yourfile.tar.gz

----------------------------------------------------------------

Download Multiple Files:

curl -O http://yoursite.com/info.html -O http://mysite.com/about.html

----------------------------------------------------------------

Download URLs From a File:

xargs -n 1 curl -O < listurls.txt

----------------------------------------------------------------

Use a Proxy with or without Authentication:

curl -x proxy.yourdomain.com:8080 -U user:password -O http://yourdomain.com/yourfile.tar.gz

where you can skip -U user:password if your proxy does not require authentication.

----------------------------------------------------------------

Query HTTP Headers:

curl -I mohsenhassani.com

HTTP headers allow the remote web server to send additional information about itself along with the actual request. This provides the client with details on how the request is being handled.

----------------------------------------------------------------

Make a POST request with Parameters:

The following command will send the firstName and lastName parameters, along with their corresponding values, to https://yourdomain.com/info.php.

curl --data "firstName=John&lastName=Doe" https://yourdomain.com/info.php

----------------------------------------------------------------

Download Files from an FTP Server with or without Authentication:

curl -u username:password -O ftp://yourftpserver/yourfile.tar.gz

where you can skip -u username:password if the FTP server allows anonymous logins.

----------------------------------------------------------------

Upload Files to an FTP server with or without Authentication:

curl -u username:password -T mylocalfile.tar.gz ftp://yourftpserver

----------------------------------------------------------------

Specify User Agent:

The user agent is part of the information that is sent along with an HTTP request. This indicates which browser the client used to make the request.

curl -I http://localhost --user-agent "I am a new web browser"

OR

curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" https://getfedora.org/

----------------------------------------------------------------

Store Website Cookies:

Want to see which cookies are downloaded to your computer when you browse to https://www.cnn.com? Use the following command to save them to cnncookies.txt.

curl --cookie-jar cnncookies.txt https://www.cnn.com/index.html -O

----------------------------------------------------------------

Send Website Cookies:

curl --cookie cnncookies.txt https://www.cnn.com

----------------------------------------------------------------

Modify Name Resolution:

If you’re a web developer and want to test a local version of yourdomain.com before pushing it live, you can make curl resolve http://www.yourdomain.com to your localhost like so:

curl --resolve www.yourdomain.com:80:localhost http://www.yourdomain.com/

Thus, the query to http://www.yourdomain.com will tell curl to request the site from localhost instead of using DNS or the /etc/hosts file.

----------------------------------------------------------------

Limit Download Rate:

To prevent curl from hosing your bandwidth, you can limit the download rate to 100 KB/s as follows.

curl --limit-rate 100K http://yourdomain.com/yourfile.tar.gz -O

----------------------------------------------------------------

Follow Redirects:

By default, curl doesn’t follow the HTTP Location headers.

If you try to retrieve the non-www version of google.com, you will notice that instead of getting the source of the page you’ll be redirected to the www version:

curl google.com


The -L option instructs curl to follow any redirect until it reaches the final destination:

curl -L google.com

----------------------------------------------------------------

Pass HTTP Referer:

curl --referer http://example.com/bot.html http://www.cyberciti.biz/

curl --referer fooBar www.cyberciti.biz

----------------------------------------------------------------

GET with JSON:

curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://hostname/resource

----------------------------------------------------------------

POST:

For posting data:
curl --data "param1=value1&param2=value2" http://hostname/resource


For file upload:
curl --form "fileupload=@filename.txt" http://hostname/resource


RESTful HTTP Post:
curl -X POST -d @filename http://hostname/resource
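For posting JSON (the header is standard; the body here is just example data):

curl -X POST -H "Content-Type: application/json" -d '{"firstName": "John", "lastName": "Doe"}' http://hostname/resource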

----------------------------------------------------------------

+Password Protect GRUB Bootloader (June 26, 2020, 7:57 p.m.)

1- Issue the following command. You will be prompted to create and verify a password for GRUB
grub-mkpasswd-pbkdf2


2- Once that completes, the command will generate a hashed password. We need to add the new hash to 00_header file. Issue the command:
vim /etc/grub.d/00_header

At the bottom of that file, paste the following:
cat << EOF
set superusers="admin"
password_pbkdf2 admin HASH
EOF

where HASH is the hash generated earlier.
Save and close that file.


3- Update GRUB with the command:
update-grub2

+Shebang (June 22, 2020, 2:32 p.m.)

The shebang is a special character sequence in a script file that specifies which program should be called to run the script. The shebang is always on the first line of the file, and is composed of the characters #! followed by the path to the interpreter program. You can also specify command line options, if necessary.

For example, the following line contains the shebang, the path to the Perl interpreter, and the command line option -w:

#!/usr/bin/perl -w

#!/usr/bin/python

#!/usr/bin/python3

#!/usr/bin/env bash
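A minimal end-to-end example (the file name hello.sh is arbitrary):

cat > hello.sh <<'EOF'
#!/usr/bin/env bash
echo "Running with bash $BASH_VERSION"
EOF

chmod +x hello.sh
./hello.sh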

+scp - Escape spaces (Feb. 20, 2019, 4:28 p.m.)

scp " mhass . ir :/home/mohsen/programs/\[Radio\ Streams\].m3u" Temp/

OR

scp -T mhass . ir :/home/mohsen/programs/"\"[Radio Streams].m3u\"" Temp/

+Delete old kernels (Jan. 12, 2019, 4:27 p.m.)

1- List all installed kernels
dpkg -l | grep linux-image | awk '{print$2}'


2- Remove a particular linux-image along with its configuration files:
apt remove --purge linux-image-5.0.0-38-generic


3- Update grub2 configuration:
update-grub2

+Environmental and Shell Variables (Nov. 17, 2018, 4:25 p.m.)

Environmental variables are variables that are defined for the current shell and are inherited by any child shells or processes. Environmental variables are used to pass information into processes that are spawned from the shell.

Shell variables are variables that are contained exclusively within the shell in which they were set or defined. They are often used to keep track of ephemeral data, like the current working directory.

---------------------------------------------------------------

Printing Shell and Environmental Variables:

env
OR
printenv


set | less
OR
(set -o posix; set)

---------------------------------------------------------------

Difference between "env" and "printenv":

The difference between the two commands is only apparent in their more specific functionality. For instance, with printenv you can request the values of individual variables:

printenv SHELL

/bin/bash

On the other hand, env lets you modify the environment that programs run in, by passing a set of variable definitions into a command like this:

env VAR1="blahblah" command_to_run command_options

---------------------------------------------------------------

Set command:

Returns a list of all shell variables, environmental variables, local variables, and shell functions.

---------------------------------------------------------------

Setting Shell and Environmental Variables:

Specify a name and a value:
TEST_VAR='Hello World!'

It is a shell variable. This variable is available in our current session, but will not be passed down to child processes.

Test it:
set | grep TEST_VAR

We can verify that this is not an environmental variable by trying the same thing with printenv:
printenv | grep TEST_VAR
No output should be returned.

echo $TEST_VAR

---------------------------------------------------------------

Creating Environmental Variables:

export TEST_VAR

printenv | grep TEST_VAR

---------------------------------------------------------------

Change the environmental variable back into a shell variable by typing:

export -n TEST_VAR
It is no longer an environmental variable:
printenv | grep TEST_VAR

However, it is still a shell variable:
set | grep TEST_VAR

If we want to completely unset a variable, either shell or environmental, we can do so with the unset command:
unset TEST_VAR

We can verify that it is no longer set:
echo $TEST_VAR

---------------------------------------------------------------

+Remote Desktop Clients (Oct. 14, 2018, 4:22 p.m.)

VNC -> Virtual Network Computing

RDP -> Remote Desktop Protocol

----------------------------------------------------------------

RDP is used to connect to Windows-based computers

VNC is used to connect to Linux machines

----------------------------------------------------------------

+Open URL in command line (Oct. 29, 2018, 3:55 p.m.)

https://www.8bitavenue.com/2018/02/how-to-open-url-in-linux-by-command-line/

------------------------------------------------------------

Linux:

xdg-open https://www.8bitavenue.com

------------------------------------------------------------

Unix:


wget https://www.8bitavenue.com
Unlike a browser, wget only downloads the HTML file; it does not render it.


curl https://www.8bitavenue.com
-------------------------------------------------------------

+Bropages (June 7, 2020, 4:12 p.m.)

The slogan of this utility is "just get to the point".

1- apt install build-essential ruby-dev

2- gem install bropages

-----------------------------------------------------------

Fedora:

dnf -y install gcc-c++ ruby-devel

-----------------------------------------------------------

Usage:

bro find

This will display a big list/help of usages of "find" command.

-----------------------------------------------------------

+Firefox - Fixing PDFs opening in GIMP (Sept. 28, 2018, 4 p.m.)

Open the file:
vim /usr/share/applications/mimeinfo.cache

Change the line:
application/pdf=gimp.desktop;okularApplication_pdf.desktop;
To:
application/pdf=okularApplication_pdf.desktop;

+inxi - System Information Tool (July 17, 2018, 3:57 p.m.)

inxi:

will produce output to do with system CPU, kernel, uptime, memory size, hard disk size, number of processes, the client used, and inxi version.

----------------------------------------------------------

Show Linux Kernel and Distribution Info:
inxi -S


Monitor Linux CPU Temperature and Fan Speed:
inxi -s

----------------------------------------------------------

Find Linux Laptop or PC Model Information:
inxi -M

----------------------------------------------------------

Find Linux CPU and CPU Speed Information
inxi -C

----------------------------------------------------------

Show advanced network card information including interface, speed, mac id, state, IPs, etc:
inxi -Nni

----------------------------------------------------------

View a distro repository data:
inxi -r

----------------------------------------------------------

View weather info:
inxi -w
inxi -w Tehran,Iran

----------------------------------------------------------

Top 10 most active processes eating up CPU and memory:
inxi -t cm10

----------------------------------------------------------

Linux Hard Disk Partition Details
inxi -p

----------------------------------------------------------

Full Linux System Information:
inxi -F

----------------------------------------------------------

Linux Processes Memory Usage:
inxi -I

----------------------------------------------------------

Audio/Sound Card Information:
inxi -A

----------------------------------------------------------

+List all enabled services from systemctl (June 25, 2018, 3:57 p.m.)

systemctl list-unit-files --state=enabled

+LZMA (May 16, 2018, 3:56 p.m.)

Note that lzma and xz both use the same compression algorithm, in fact, lzma is deprecated in favor of the newer xz. So you would be better off using xz (tar -J):

tar -cpJf backboxhome.tar.xz /home/user


LZMA stands for Lempel-Ziv-Markov chain Algorithm. lzma is a compression tool, like bzip2 and gzip, for compressing and decompressing files. It tends to be significantly more efficient than bzip2 compression; and as we know, gzip's compression ratio is worse than bzip2's (and lzma's).

lzma -9 --stdout debian-9.qcow2 > debian.lzma
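For reference, the matching decompression commands (both from xz-utils):

tar -xpJf backboxhome.tar.xz
unlzma debian.lzma    # or: lzma -d debian.lzma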

+Get the absolute directory of a file in bash (May 15, 2018, 3:54 p.m.)

readlink -f <file_name>

--------------------------------------------------------

realpath <file_name>

--------------------------------------------------------

+Sort (May 13, 2018, 3:53 p.m.)

-b, --ignore-leading-blanks Ignore leading blanks

-d, --dictionary-order Consider only blanks and alphanumeric characters.

-f, --ignore-case Fold lower case to upper case characters.

-g, --general-numeric-sort Compare according to general numerical value.

-i, --ignore-nonprinting Consider only printable characters.

-M, --month-sort Compare (unknown) < `JAN' < ... < `DEC'.

-h, --human-numeric-sort Compare human-readable numbers (e.g., "2K", "1G").

-n, --numeric-sort Compare according to string numerical value.

-R, --random-sort Sort by random hash of keys.

--random-source=FILE Get random bytes from FILE.

-r, --reverse Reverse the result of comparisons.

--sort=WORD Sort according to WORD: general-numeric -g, human-numeric -h, month -M, numeric -n, random -R, version -V.

-V, --version-sort Natural sort of (version) numbers within text.

-c, --check, --check=diagnose-first Check for sorted input; do not sort.

-C, --check=quiet, --check=silent Like -c, but do not report first bad line.

-k, --key=POS1[,POS2] Start a key at POS1 (origin 1), end it at POS2 (default end of line). See POS syntax below.

-o, --output=FILE Write result to FILE instead of standard output.

-t, --field-separator=SEP Use SEP instead of non-blank to blank transition.

-z, --zero-terminated End lines with 0 byte, not newline.

-------------------------------------------------------

sort -k2 test.txt

Sort according to the characters starting at the second column. k2 refers to the second column.

-------------------------------------------------------

The -r option reverses the sorting

sort -k2 -r test.txt

-------------------------------------------------------

Sorting a Stream Output:

ls -al | sort -r -n -k5

The -n option specifies numeric sorting rather than alphabetic.

-------------------------------------------------------

sort -k 2n

sort -nk2 lsl.txt

-------------------------------------------------------

The -k m,n option lets you sort on a key that is potentially composed of multiple fields (start at column m, end at column n):

sort -k2n,2 -k1,1 quota

sort -k 3.3,3.5 data.txt
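A quick worked example with made-up data, showing why the n modifier matters:

printf 'bob 23\nalice 9\ncarol 103\n' | sort -k2     # alphabetic: carol, bob, alice
printf 'bob 23\nalice 9\ncarol 103\n' | sort -k2n    # numeric: alice, bob, carol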

-------------------------------------------------------

join <(sort file1.txt) <(sort file2.txt)

-------------------------------------------------------

ls -l /home/$USER | sort -t "," -nk2,5 -k9

-------------------------------------------------------

sort -u lsl.txt lsla.txt

-------------------------------------------------------

+Find process ID (PID) (May 1, 2018, 3:53 p.m.)

pgrep ^firefox

+ssh-keygen with name (April 28, 2018, 3:52 p.m.)

ssh-keygen -t rsa -f my_backups

+Mount mdf image (April 24, 2018, 3:51 p.m.)

1- apt install mdf2iso

2- mount -o loop -t iso9660 file.mdf /mnt/mdf

--------------------------------------------------------

You can also try the software acetoneiso which is basically some sort of Daemon Tools / Alcohol 120% for Linux.

--------------------------------------------------------

+gzip, gunzip, zcat (May 21, 2018, 3:50 p.m.)

gzip reduces the size of the named files using Lempel-Ziv coding (LZ77). Whenever possible, each file is replaced by one with the extension .gz, while keeping the same ownership modes, access, and modification times. (The default extension is -gz for VMS, z for MSDOS, OS/2 FAT, Windows NT FAT and Atari.) If no files are specified, or if a file name is "-", the standard input is compressed to the standard output. gzip will only attempt to compress regular files. In particular, it will ignore symbolic links.


If the compressed file name is too long for its file system, gzip truncates it. gzip attempts to truncate only the parts of the file name longer than 3 characters. (A part is delimited by dots.) If the name consists of small parts only, the longest parts are truncated. For example, if file names are limited to 14 characters, gzip.msdos.exe is compressed to gzi.msd.exe.gz. Names are not truncated on systems which do not have a limit on file name length.

By default, gzip keeps the original file name and timestamp in the compressed file. These are used when decompressing the file with the -N option. This is useful when the compressed file name was truncated or when the time stamp was not preserved after a file transfer.

Compressed files can be restored to their original form using gzip -d or gunzip or zcat. If the original name saved in the compressed file is not suitable for its file system, a new name is constructed from the original one to make it legal.

------------------------------------------------------------

gunzip takes a list of files on its command line and replaces each file whose name ends with .gz, -gz, .z, -z, or _z (ignoring case) and which begins with the correct magic number with an uncompressed file without the original extension. gunzip also recognizes the special extensions .tgz and .taz as shorthands for .tar.gz and .tar.Z respectively. When compressing, gzip uses the .tgz extension if necessary instead of truncating a file with a .tar extension.

gunzip can currently decompress files created by gzip, zip, compress, compress -H or pack. The detection of the input format is automatic.

------------------------------------------------------------

zcat is identical to gunzip -c. (On some systems, zcat may be installed as gzcat to preserve the original link to compress.) zcat uncompresses either a list of files on the command line or its standard input and writes the uncompressed data on standard output. zcat will uncompress files that have the correct magic number whether they have a .gz suffix or not.

------------------------------------------------------------

+Join Several Partitions Together (May 18, 2018, 3:45 p.m.)

How To Join Several Partition Together To Form a Single Larger One:

---------------------------------------------------------------------

1- apt install mhddfs

2- Create a new mount point directory:
mkdir /mnt/virtual.backup

3- Using the `mount` command, find the mount point of each disk you intend to aggregate.
mhddfs /mnt/backup1,/mnt/backup2,/mnt/backup3 /mnt/virtual.backup -o allow_other
The /mnt/backup{1..3} are the mount points.

4- That's it! Verify the virtual directory using "df -h".
Now, update the /etc/fstab file:
mhddfs#/mnt/backup1;/mnt/backup2;/mnt/backup3 /mnt/virtual.backup fuse defaults,allow_other 0 0

---------------------------------------------------------------------

For unmounting:
umount /mnt/virtual.backup

---------------------------------------------------------------------

+Crontab - Remove old backup files (June 7, 2020, 11:15 a.m.)

find /var/mohsen_backups/ -type f -name "*.tar.gz" ! -newermt "`date --date='-8 days'`" -exec rm {} +
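To run the cleanup automatically, a crontab entry like the following would work (03:00 nightly is just an example schedule):

# crontab -e
0 3 * * * find /var/mohsen_backups/ -type f -name "*.tar.gz" ! -newermt "$(date --date='-8 days')" -exec rm {} +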

+Install WINE on Kubuntu (May 6, 2020, 11:42 a.m.)

dpkg --add-architecture i386

apt-get -y install software-properties-common wget

wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add -

apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main'

add-apt-repository ppa:cybermax-dexter/sdl2-backport

apt install --install-recommends winehq-stable

+Kubuntu - Upgrade (April 26, 2020, 10:05 a.m.)

Test correct version is found:
do-release-upgrade -c


Upgrade in case the correct version is shown:
do-release-upgrade

---------------------------------------------------------------------------------

Upgrade to the development release:

do-release-upgrade -d

---------------------------------------------------------------------------------

if you wish to do the entire upgrade in a terminal:

do-release-upgrade -m desktop

---------------------------------------------------------------------------------

+OpenConnect VPN Server (April 24, 2020, 10:03 p.m.)

https://www.linuxbabe.com/ubuntu/openconnect-vpn-server-ocserv-ubuntu-16-04-17-10-lets-encrypt

---------------------------------------------------------------

1- apt install ocserv



2- Check its status:
systemctl status ocserv

If not started, use the following command to start the service:
systemctl start ocserv

By default, the OpenConnect VPN server listens on TCP and UDP port 443. If that port is already being used by a web server, then the VPN server can't be started. (Fix this problem in step 5.)



3- Installing Let’s Encrypt Client (Certbot):
apt install software-properties-common
add-apt-repository ppa:certbot/certbot
apt update
apt install certbot



4- Obtaining a TLS Certificate from Let’s Encrypt:
If there’s no web server running on your Ubuntu 16.04/18.04 server and you want OpenConnect VPN server to use port 443, then you can use the standalone plugin to obtain TLS certificate from Let’s Encrypt. Run the following command. Don’t forget to set A record for your domain name.

sudo certbot certonly --standalone --preferred-challenges http --agree-tos --email your-email-address -d vpn.example.com



5- If you had a problem in step 2, follow this step. If not, skip this step.

If your server has a web-server listening on port 80 and 443, and you want OpenConnect VPN server to use a different port, then it’s a good idea to use the webroot plugin to obtain a certificate because the webroot plugin works with pretty much every web server and we don’t need to install the certificate in the web server.

First, you need to create a virtual host for vpn.example.com.
Nginx

If you are using Nginx, then

sudo nano /etc/nginx/conf.d/vpn.example.com.conf

Paste the following lines into the file.

server {
    listen 80;
    server_name vpn.example.com;

    root /var/www/vpn.example.com/;

    location ~ /.well-known/acme-challenge {
        allow all;
    }
}

Save and close the file. Then create the web root directory.

sudo mkdir -p /var/www/vpn.example.com

Set www-data (Nginx user) as the owner of the web root.

sudo chown www-data:www-data /var/www/vpn.example.com -R

Reload Nginx for the changes to take effect.

sudo systemctl reload nginx

Once the virtual host is created and enabled, run the following command to obtain the Let's Encrypt certificate using the webroot plugin.

sudo certbot certonly --webroot --agree-tos --email your-email-address -d vpn.example.com -w /var/www/vpn.example.com



6- Editing OpenConnect VPN Server Configuration File:
vim /etc/ocserv/ocserv.conf

Comment the line:
auth = "pam[gid-min=1000]"
Uncomment and edit the line below it, to:
auth = "plain[passwd=./ocpasswd]"


server-cert = /etc/letsencrypt/csr/0000_csr-certbot.pem
server-key = /etc/letsencrypt/keys/0000_key-certbot.pem

try-mtu-discovery = true

default-domain = vpn.mohsenhassani.com

ipv4-network = 10.10.10.0

tunnel-all-dns = true

dns = 8.8.8.8

Comment out all the route parameters:
route = 10.10.10.0/255.255.255.0
route = 192.168.0.0/255.255.0.0
route = fef4:db8:1000:1001::/64
no-route = 192.168.5.0/255.255.255.0


Save and close the file. Then restart the VPN server for the changes to take effect.
systemctl restart ocserv



7- Fixing DTLS Handshake Failure:
cp /lib/systemd/system/ocserv.service /etc/systemd/system/ocserv.service
vim /etc/systemd/system/ocserv.service

Comment out the following two lines:
Requires=ocserv.socket
Also=ocserv.socket

Save and close the file. Then reload systemd:
systemctl daemon-reload

Stop ocserv.socket and disable it:
systemctl stop ocserv.socket
systemctl disable ocserv.socket

Restart ocserv service:
systemctl restart ocserv.service

Check the status:
systemctl status ocserv



8- Creating VPN Accounts using the ocpasswd tool:
ocpasswd -c /etc/ocserv/ocpasswd mohsen

+iptables (April 23, 2020, 9:02 p.m.)

Delete a NAT (PREROUTING/POSTROUTING) rule:


1- List NAT rules:
iptables -t nat -v -L -n --line-number


2- Delete a NAT rule:
iptables -t nat -D POSTROUTING 1

-------------------------------------------------------------------

+PPTP / L2TP - Descriptions (April 17, 2020, 11:12 a.m.)

PPTP or Point-to-Point Tunneling Protocol is an outdated method for implementing VPNs.

It was developed by Microsoft and is the easiest protocol to configure. PPTP VPN has low overhead, and that makes it faster than other VPN protocols.

PPTP VPN encrypts data using 128-bit encryption which makes it the fastest but the weakest in terms of security.

When you use a VPN connection, it usually affects your Internet speeds due to the encryption process. However, you don’t have to worry about that when using a PPTP VPN because of its low-level encryption.

----------------------------------------------------------------------

L2TP or Layer 2 Tunneling Protocol (L2TP) is the result of a partnership between Cisco and Microsoft. It was created to provide a more secure VPN protocol than PPTP.

L2TP is a tunneling protocol like PPTP that allows users to access the common network remotely.

L2TP VPN is a combined protocol that has all the features of PPTP, but runs over a faster transport protocol (UDP) thus making it more firewall-friendly.

It encrypts data using 256-bit encryption and therefore uses more CPU resources than PPTP. However, the increased overhead required to manage this security protocol makes it perform slower than PPTP.

----------------------------------------------------------------------

+Limit network bandwidth (March 11, 2020, 9:43 a.m.)

apt install wondershaper

wondershaper eth1 256 128

wondershaper clear eth1
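In the classic (Debian-packaged) wondershaper, the arguments are the interface, the downlink limit, and the uplink limit, in kilobits per second, so the command above caps eth1 at roughly 256 Kbit/s down and 128 Kbit/s up. For example, for a ~2 Mbit/s down, ~1 Mbit/s up cap:

wondershaper eth1 2048 1024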

--------------------------------------------------------------------

+Enable /etc/rc.local (March 9, 2020, 11:02 a.m.)

1- Create the following file:
vim /etc/systemd/system/rc-local.service



2- Add the following content to it:
[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target



3- Create the rc.local file:
printf '%s\n' '#!/bin/bash' 'exit 0' | sudo tee -a /etc/rc.local



4- Then add execute permission to /etc/rc.local file:
chmod +x /etc/rc.local



5- Enable and start the service on system boot:
systemctl enable rc-local
systemctl start rc-local
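To verify that the unit is wired up correctly:

systemctl status rc-local
systemctl is-enabled rc-local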

+Packages (March 3, 2020, 11:15 a.m.)

balena etcher

+Command History (Feb. 24, 2020, 11:21 a.m.)

history n
Shows the stuff typed – add a number to limit the last n items

----------------------------------------------------------------------

Ctrl + r
Interactively search through previously typed commands

----------------------------------------------------------------------

![value]
Execute the last command typed that starts with ‘value’

----------------------------------------------------------------------

![value]:p
Print to the console the last command typed that starts with ‘value’

----------------------------------------------------------------------

!!
Execute the last command typed

----------------------------------------------------------------------

!!:p

Print to the console the last command typed

----------------------------------------------------------------------

+Chaining Commands (Feb. 24, 2020, 11:16 a.m.)

commandA; commandB

Run command A and then B, regardless of the success of A

----------------------------------------------------------------------

commandA && commandB

Run command B if A succeeded

----------------------------------------------------------------------

commandA || commandB

Run command B if A failed

----------------------------------------------------------------------

commandA &

Run command A in the background

----------------------------------------------------------------------

+Terminal Shortcuts (Feb. 24, 2020, 10:53 a.m.)

Controlling the Screen:


Ctrl+S:
Stop all output to the screen. This is particularly useful when running commands with a lot of long, verbose output, but you don’t want to stop the command itself with Ctrl+C.

Ctrl+Q: Resume output to the screen after stopping it with Ctrl+S

--------------------------------------------------------------------------------

Moving the Cursor:


Ctrl+A or Home: Go to the beginning of the line.

Ctrl+E or End: Go to the end of the line.

Alt+B: Go left (back) one word.

Ctrl+B: Go left (back) one character.

Alt+F: Go right (forward) one word.

Ctrl+F: Go right (forward) one character.

Ctrl+XX: Move between the beginning of the line and the current position of the cursor. This allows you to press Ctrl+XX to return to the start of the line, change something, and then press Ctrl+XX to go back to your original cursor position. To use this shortcut, hold the Ctrl key and tap the X key twice.

--------------------------------------------------------------------------------

Deleting Text:


Ctrl+D or Delete: Delete the character under the cursor.

Alt+D: Delete all characters after the cursor on the current line.

Ctrl+H or Backspace: Delete the character before the cursor.

--------------------------------------------------------------------------------

Fixing Typos:


Alt+T: Swap the current word with the previous word.

Ctrl+T: Swap the last two characters before the cursor with each other. You can use this to quickly fix typos when you type two characters in the wrong order.

Ctrl+_: Undo your last keypress. You can repeat this to undo multiple times.


--------------------------------------------------------------------------------

Cutting and Pasting:


Ctrl+W: Cut the word before the cursor, adding it to the clipboard.

Ctrl+K: Cut the part of the line after the cursor, adding it to the clipboard.

Ctrl+U: Cut the part of the line before the cursor, adding it to the clipboard.

Ctrl+Y: Paste the last thing you cut from the clipboard. The y here stands for “yank”.

--------------------------------------------------------------------------------

Working With Your Command History:


Ctrl+P or Up Arrow:
Go to the previous command in the command history. Press the shortcut multiple times to walk back through history.

Ctrl+N or Down Arrow:
Go to the next command in the command history. Press the shortcut multiple times to walk forward through the history.

Alt+R: Revert any changes to a command you’ve pulled from your history if you’ve edited it.


Ctrl+R:
Recall the last command matching the characters you provide. Press this shortcut and start typing to search your bash history for a command.

Ctrl+O: Run a command you found with Ctrl+R.

Ctrl+G: Leave history searching mode without running a command.

--------------------------------------------------------------------------------

reset
Resets the terminal display

--------------------------------------------------------------------------------

+VNC Server (Feb. 5, 2020, 3:13 p.m.)

Try this method first; the second method has run into a dark-screen problem:

1- Install VNC server on server machine
apt install x11vnc


2- Run the GUI or command line x11vnc application/command.


3- Install vnc viewer on the client machine (Windows or Linux) and connect to the IP:port


================== Second Method ==================

1- apt install vnc4server


2- As a normal Linux user, enter the following command and set a password:
$ vncserver


3- vim /etc/vnc.conf
$localhost = "no";
$vncStartup = "$ENV{HOME}/.vnc/xstartup";


4- Create a file in ~/.vnc/xstartup with the following content:

vim ~/.vnc/xstartup

#!/bin/sh

unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec startx

--------------------------------------------------

vncserver -kill :1

--------------------------------------------------

vncserver -list :*

--------------------------------------------------

+CUDA (Feb. 4, 2020, 3:35 p.m.)

http://developer.download.nvidia.com/compute/cuda/repos/
https://developer.nvidia.com/cuda-toolkit

-------------------------------------------------------------------------

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin

mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600

wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb

dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb

apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub

apt update

apt install cuda

-------------------------------------------------------------------------

+Nvidia GPU Drivers for Tensorflow (Feb. 3, 2020, 3:08 p.m.)

1- Download and install the NVIDIA machine learning repo package:

wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb

dpkg -i nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb

-----------------------------------------------------------------------------

2- Download nvidia cuda repo package:

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.2.89-1_amd64.deb

dpkg -i cuda-repo-ubuntu1804_10.2.89-1_amd64.deb

-----------------------------------------------------------------------------

+Test GPU (Feb. 3, 2020, 2:30 p.m.)

Run google-chrome and navigate to the URL about:gpu. If Chrome has figured out how to use OpenGL, you will get extremely detailed information about your GPU.

-----------------------------------------------------------------------

cat /proc/driver/nvidia/gpus/*/information

-----------------------------------------------------------------------

lspci | grep ' VGA ' | cut -d" " -f 1

-----------------------------------------------------------------------

lspci -v -s $(lspci | grep ' VGA ' | cut -d" " -f 1)

-----------------------------------------------------------------------

nvidia-smi --list-gpus

nvidia-smi -q

-----------------------------------------------------------------------

+Nvidia Drivers (Feb. 3, 2020, 10:11 a.m.)

1- Enable the non-free repository.

vim /etc/apt/sources.list
deb http://deb.debian.org/debian/ buster main non-free


2- Update the repository index files and install nvidia-detect utility:
apt update
apt install nvidia-detect


3- Detect your Nvidia card model and suggested Nvidia driver:
# nvidia-detect


4- As suggested install the recommended driver by the previous step:
apt install nvidia-driver

5- Reboot:
systemctl reboot

-----------------------------------------------------------------

1- Search and download the driver file from Nvidia website:
https://www.nvidia.com/Download/index.aspx?lang=en-us


2- apt install build-essential linux-headers-`uname -r`


3- bash NVIDIA-Linux-x86_64-440.44.run (The file you downloaded in step 1)

-----------------------------------------------------------------

+Motherboard (Feb. 2, 2020, 10:44 a.m.)

To find your motherboard model, use dmidecode or inxi command:

dmidecode -t baseboard | grep -i 'Product'

-------------------------------------------------------------------------

apt install inxi

inxi -M

-------------------------------------------------------------------------

+VGA / GPU (Feb. 2, 2020, 10:33 a.m.)

Fetch details about graphics unit (vga card or video card)

lspci -vnn | grep VGA -A 12

-------------------------------------------------------------------------

apt install lshw

lshw -numeric -C display

lshw -class display

lshw -short | grep -i --color display

-------------------------------------------------------------------------

+eyeD3 (Jan. 10, 2020, 6:53 p.m.)

eyeD3 is a Python tool for working with audio files, specifically MP3 files containing ID3 metadata. It provides a command-line tool (eyeD3) and a Python library (import eyed3) that can be used to write your own applications or plugins that are callable from the command-line tool.

----------------------------------------------------------------

It's better to use a virtualenv for installing eyeD3 and its plugins (if you need any):
Create and activate a virtualenv with Python 3, then install eyeD3 with its "display" plugin:

pip install eyed3[display-plugin]

----------------------------------------------------------------

For example, to set some song information in an mp3 file called song.mp3:

$ eyeD3 -a Integrity -A "Humanity Is The Devil" -t "Hollow" -n 2 song.mp3


With this command, we’ve set the artist (-a/--artist), album (-A/--album), title (-t/--title), and track number (-n/--track-num) properties in the ID3 tag of the file.

----------------------------------------------------------------

eyeD3 song.mp3

The same can be accomplished using Python.

import eyed3
audiofile = eyed3.load("song.mp3")
audiofile.tag.artist = u"Integrity"
audiofile.tag.album = u"Humanity Is The Devil"
audiofile.tag.album_artist = u"Integrity"
audiofile.tag.title = u"Hollow"
audiofile.tag.track_num = 2
audiofile.tag.save()

----------------------------------------------------------------

Rename mp3 files to their titles and prepend the index number:

files=(*.mp3)
i=0

for file in "${files[@]}"; do
    i=$(( i + 1 ))
    # quote "$file" so names with spaces work; keep $title single-quoted,
    # since it is an eyeD3 template variable, not a shell variable
    eyeD3 --rename "$i"'- $title' "$file"
done

----------------------------------------------------------------

https://eyed3.readthedocs.io/en/latest/plugins/display_plugin.html

Display title (you need the "display" plugin installed):

eyeD3 -P display -p %t%

eyeD3 -P display -p %title%

----------------------------------------------------------------

+Watch (Jan. 9, 2020, 1:12 p.m.)

watch -d -n 0.2 du -sh

-----------------------------------------------------------------

-d

highlights the changes in the command output.

-----------------------------------------------------------------

-n, --interval <secs>

seconds to wait between updates

-----------------------------------------------------------------

-t, --no-title

turn off header

-----------------------------------------------------------------

+Send Remote Commands Via SSH (Jan. 6, 2020, 5:07 p.m.)

ssh mohsen@mohsenhassani.com 'ls -l'


ssh mohsen@mohsenhassani.com 'ls -l; ps -aux; whoami'


ssh -t mohsen@mohsenhassani.com 'top'

The -t flag tells ssh that you'll be interacting with the remote shell. Without the -t flag top will return results after which ssh will log you out of the remote host immediately. With the -t flag, ssh keeps you logged in until you exit the interactive command. The -t flag can be used with most interactive commands, including text editors like pico and vi.
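A related trick: a local script can be run on the remote host without copying it over first (local_script.sh is a placeholder name):

ssh mohsen@mohsenhassani.com 'bash -s' < local_script.sh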

+Remap keyboard keys (Dec. 22, 2019, 5:43 p.m.)

1- run xev in terminal


2- You need to get the code of the key you intend to switch. So after running xev, press the key you want to switch and note the keycode.


3- Suppose you want to swap that key with the left Shift. Using the following example, get the name of the left Shift key:
xmodmap -pke | grep -i shift


4- Now you can change the key functionality with the following command:
xmodmap -e "keycode 94 = Shift_L"


5- To make this change permanent, you need to put the command in ~/.profile
vim ~/.profile
xmodmap -e "keycode 94 = Shift_L"

+MkDocs (Nov. 5, 2019, 3:35 p.m.)

https://www.mkdocs.org/


1- apt install mkdocs


2- Create new MkDocs project:
mkdocs new my_project
cd my_project


3-
mkdocs serve
Open up http://127.0.0.1:8000/


4- Building the site:
mkdocs build

----------------------------------------------------------------

Change development address:
dev_addr: '127.0.0.1:8001'

----------------------------------------------------------------

Configuration:
https://www.mkdocs.org/user-guide/configuration/

----------------------------------------------------------------

Installing a new theme:

https://github.com/mkdocs/mkdocs/wiki/MkDocs-Themes

----------------------------------------------------------------

Serve in remote host:
mkdocs serve -a 0.0.0.0:8000

----------------------------------------------------------------

Markdown documentation:
https://yakworks.github.io/mkdocs-material-components/cheat-sheet/

1- Emphasis:

_italic_
__bold__
^^underline^^
~~strike through~~
==highlight==
`inline code`
==*you* **can** ^^combine^^ `too`==



2- Soft & Hard Line Breaks:

Put 2 spaces at the end of a line to force a line break.
You can also force a break anywhere using the <br> tag.



3- Lists:

* need a blank line above to start a new list
+ valid bullet symbols
+ `*`, `-` or `+`
    - 4 spaces or 1 tab
    - to indent

1. use *numbers* for ordered
    * can nest
2. **numbers** can be in order
3. can also nest
    1. but it will fix them if not

- list item with two paragraphs.

anything like this paragraph
should be indented by 4 spaces
or a tab

- you can add blocks too

> :memo:
>
> * list under lists
> * under lists





4- Tasks:

- [ ] Task Lists `- [ ]`
- [x] x instead of space
- [x] will mark it complete
- [ ] work just like lists
    * can contain indents
    * or anything else a list can

1. Or can be nested under other lists
    - [ ] like this
    - [ ] and this

2. This can help
    - [ ] like this
    - [ ] and this




5- Links:

[simple link](https://www.google.com )
[with optional title](https://www.google.com "Google's Homepage")
point to a [relative file or md](./embedding/lucid.md) or
mail link with emoji [📧](mailto:joshdev@9ci.com) or
click this cloud icon to see the list of icon options
[_cloud_{.icon}](https://material.io/icons/)

or [use an image ![](images/dingus/image-small.png)](images/dingus/image.png)

[Reference-Style Links][some reference id]
put link at bottom of paragraph or page.
you can use numbers or text for
[reference-style link definitions][1]
or leave it empty and
just use the [link text itself]

to [open in new tab](sandbox.md){.new-tab}
use `{target=_blank} or {.new-tab}` attributes
use it on [ref links][new tab]{.new-tab} too

Indenting _reference links_
2 spaces is not required
but a recommended convention

[some reference id]: https://daringfireball.net/projects/markdown/syntax#link
[1]: http://reason.com/blog
[link text itself]: ./images/material.png
[new tab]: sandbox.md




6- Images:

inline ![](images/dingus/image-small.png)
with alt text ![foo](images/dingus/image-small.png)
with ref links ![img-small][]
can use [sizing attributes](blocks/#sizing-alignment)

Put `zoomify` in the alt text bracket to enable
clicking to zoom. Try clicking on any of
these images ![zoomify][img-dingus]{.tiny}

![zoomify](images/dingus/image.png){.center .xsmall}

> :camera: **Figure Title**
> ![zoomify](images/dingus/image.png){.center .small}

[img-small]: ./images/dingus/image-small.png
[img-dingus]: ./images/dingus/image.png





7- Abbreviations:

here are some abbr's
HTML and FUBAR

>:bulb: if your editor gets confused by
not having an enclosing * then
just add it to end of abbr def.

---

>:warning: Don't indent these, doesn't seem to work

*[abbr]: Abbreviations
*[def]: Definition
*[HTML]: Hyper Text Markup Language
*[FUBAR]: You know what it means*



8- Footnotes:

Footnotes[^1] work like reference links
They auto-number like ordered lists[^3]
You can use any
reference id[^text reference]
like ref links they can be
organized at bottom
of paragraph or page.

[^1]: footnote, click the return icon here to go back ->
[^3]: the number will not necessarily be what you use
[^text reference]: text reference




9- Tables:

Colons can be used to align columns.
3 dashes min to separate headers.
Outer pipes (|) are optional,
and you don't need to make the
raw Markdown line up prettily.
You can also use inline Markdown.

| Tables | Are | Cool |
| -------- |:-------------:| ---------:|
| col 3 is | right-aligned | $1600 |
| col 2 is | centered | $12 |
| | **Total** | **$1612** |

==Table== | **Format** | 👀 _scramble_
--- | --- | ---
*Still* | `renders` | **nicely**
[with links](images/dingus/image-small.png) | images ![zoomify](images/dingus/image-small.png){.tiny} | emojis 🍔
icons _cloud_{.icon} | footnotes[^1] | use `<br>` <br> for multi-line <br> line breaks





10- Blockquotes:

> Blockquotes are handy to callout text.
they are greedy and will keep
grabbing text. The '>' is optional unless trying to join
>
paragraphs, tables etc.

a blank line and a new paragraph
or other markdown thing end them

>:bulb:
use a `---` separator or `<br>`
if you want multiple separate block quotes

---

> can have nested
> > blockquotes inside of block quotes
block quotes can also contain any valid markdown





11- Blocks - admonitions, callouts, sidebars:

> :memo: **Memo Admonition**
use blockquotes
with emoji indicators for
admonition memos, callout etc..

---

> :boom:
A title like above is optional

---

> :bulb: See [the section about blocks](blocks.md#cheatsheet)
for the list of emojis that can be used.





12- Row Divs:

<div markdown="1" class="two-column">

(markdown content for the two columns goes here)

</div>




13- Headings & Breaks:

# h1 Heading
## h2 Heading
### h3 Heading
#### h4 Heading

Horizontal Rules
---

----------------------------------------------------------------

Material Design:
https://squidfunk.github.io/mkdocs-material/
https://yakworks.github.io/mkdocs-material-components/cheat-sheet/

pip install mkdocs-material


Sample Config:

# Configuration
theme:
  name: 'material'
  palette:
    primary: 'purple'
    accent: 'purple'
  feature:
    tabs: true

# Extensions
markdown_extensions:
  - admonition
  - codehilite:
      guess_lang: false
  - toc:
      permalink: true


----------------------------------------------------------------

Deployment:

mkdocs build

A folder named "site" will be created. Zip and scp it to the server and serve it using Nginx or any other web server.

If you don't want the output files to be built in the "site" directory, set another name via the site_dir configuration option in the mkdocs.yml file.
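
For example, in mkdocs.yml (the directory name "docs-build" is just an arbitrary example):

site_dir: docs-build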

----------------------------------------------------------------

+Set up PPTP VPN Server (Nov. 1, 2019, 7:41 p.m.)

1- Install pptpd and a toolkit to save iptables-rules:
apt install pptpd iptables-persistent -y



2- Edit the file /etc/ppp/pptpd-options

And comment out the following lines:
refuse-pap
refuse-chap
refuse-mschap
require-mschap-v2
require-mppe-128
ms-dns 8.26.56.26
ms-dns 8.20.247.20



3- Add VPN User Accounts:
vim /etc/ppp/chap-secrets

Add the user and password as follows. Use the tab key to separate them.
mohsen pptpd my-password *
OR
mohsen l2tpd my-password *
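
The columns are client, server, secret, and allowed IP addresses ("*" means any). The server field should match the name your daemon uses; the stock header of the file reads:
# client        server  secret                  IP addresses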



4- Allocate Private IP for VPN Server and Clients:
vim /etc/pptpd.conf

Add the following lines to the end of the file.
localip 10.0.4.1
remoteip 10.0.4.2-200



5- Enable IP Forwarding:
vim /etc/sysctl.conf

Add the following line:
net.ipv4.ip_forward = 1

Then apply the change:
sysctl -p



6- Configure Firewall for IP Masquerading:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A POSTROUTING -t nat -o ppp+ -j MASQUERADE
# Enable IP forwarding
iptables -F FORWARD
iptables -A FORWARD -j ACCEPT
# Accept GRE packets
iptables -A INPUT -p 47 -j ACCEPT
iptables -A OUTPUT -p 47 -j ACCEPT
# Accept incoming connections to port 1723 (PPTP)
iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
# Accept all packets via ppp* interfaces (for example, ppp0)
iptables -A INPUT -i ppp+ -j ACCEPT
iptables -A OUTPUT -o ppp+ -j ACCEPT



7- Save the iptables rules so they are restored on each reboot:
iptables-save >/etc/iptables/rules.v4

vim /etc/network/if-pre-up.d/iptables-restore-pptp
#!/bin/bash
/sbin/iptables-restore < /etc/iptables/rules.v4

Save the file and make it executable:
chmod +x /etc/network/if-pre-up.d/iptables-restore-pptp



8- Start pptpd Daemon:
service pptpd start
service pptpd stop
service pptpd restart
service pptpd status
update-rc.d pptpd enable



9- In order to verify that it is running and listening for incoming connections:

netstat -alpn | grep pptp

------------------------------------------------------------------------

Install the following packages on client system:
apt install pptp-linux network-manager-pptp

In network manager add a PPTP VPN.
You will only need the following information:
- Gateway: Which is the IP address of your VPN server.
- Login: Which is the username in the above chap-secrets file
- Password: Which is the password in the above chap-secrets file.

------------------------------------------------------------------------

Get PSK (Pre-shared key):

cat /etc/ipsec.d/passwd

user:$1$LFUJ14..$j/XsVjDvrLO2ov2sY32Lp1:xauth-psk

The (XsVjDvrLO2ov2sY32Lp1) part is the PSK.

------------------------------------------------------------------------

+Network Manager Logs (Nov. 1, 2019, 6:12 p.m.)

journalctl -fu NetworkManager

+UFW - Uncomplicated Firewall (Oct. 29, 2019, 11:25 a.m.)

The default firewall configuration tool for Ubuntu is ufw. Developed to ease iptables firewall configuration, ufw provides a user-friendly way to create an IPv4 or IPv6 host-based firewall. By default, UFW is disabled.

------------------------------------------------------------------

ufw enable

ufw status verbose

ufw show raw

------------------------------------------------------------------

Allow:
ufw allow <port>/<optional: protocol>

To allow incoming tcp and udp packets on port 53
ufw allow 53

To allow incoming tcp packets on port 53
ufw allow 53/tcp

To allow incoming udp packets on port 53
ufw allow 53/udp


To allow packets from 207.46.232.182:
ufw allow from 207.46.232.182

ufw allow from 192.168.1.0/24

ufw allow from 192.168.0.4 to any port 22

ufw allow from 192.168.0.4 to any port 22 proto tcp
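
Port ranges are also supported; when allowing a range, the protocol must be specified (the range below is just an example):
ufw allow 6000:6007/tcp
ufw allow 6000:6007/udp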

------------------------------------------------------------------

Deny:
ufw deny <port>/<optional: protocol>

To deny tcp and udp packets on port 53
ufw deny 53

To deny incoming tcp packets on port 53
ufw deny 53/tcp

To deny incoming udp packets on port 53
ufw deny 53/udp

Deny by specific IP:
ufw deny from 207.46.232.182

ufw deny from 192.168.0.1 to any port 22

------------------------------------------------------------------

Delete Existing Rule:
ufw delete deny 80/tcp
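
You can also delete by rule number; list the numbered rules first, then delete by index (the index below depends on your own rule list):
ufw status numbered
ufw delete 2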

------------------------------------------------------------------

Services:

Allow by Service Name:
ufw allow <service name>
ufw allow ssh

Deny by Service Name:
ufw deny <service name>
ufw deny ssh

------------------------------------------------------------------

Status:
Checking the status of ufw will tell you if ufw is enabled or disabled and also list the current ufw rules that are applied to your iptables.

ufw status

------------------------------------------------------------------

Logging:

To enable logging use:
ufw logging on

To disable logging use:
ufw logging off

------------------------------------------------------------------

+httrack (Oct. 19, 2019, 1:09 a.m.)

1- Installation:
apt install httrack


2- Usage:
httrack https://songslover.app/best-of-year/v-a-best-of-2018.html -r2 '-*' '+*mp3' -X0 --update

+Radio Streaming Apps (Feb. 20, 2019, 10:32 a.m.)

Cantata

apt install cantata mpd

Favorite List file location:
.local/share/data/cantata/mpd/playlists/

-----------------------------------------------------------

Odio

apt install snapd
snap install odio

-----------------------------------------------------------

Lollypop

add-apt-repository ppa:gnumdk/lollypop
apt update
apt install lollypop

If not found, maybe it's "lollypop-xenial". Do an apt-cache search lollypop to find the correct name.

-----------------------------------------------------------

Guayadeque

add-apt-repository ppa:anonbeat/guayadeque
apt-get update
apt install guayadeque

-----------------------------------------------------------

+CentOS - yum nogpgcheck (July 7, 2019, 9:39 p.m.)

yum --nogpgcheck localinstall packagename.arch.rpm

+CentOS - EPEL (July 7, 2019, 7 p.m.)

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

+CentOS - Check version (July 7, 2019, 6:56 p.m.)

rpm -q centos-release

+SMB (June 26, 2019, 9:27 p.m.)

apt install smbclient

-------------------------------------------------------------

List all shares:
smbclient -L <IP Address> -U Mohsen

Connect to a Disk or other services:
smbclient //<IP Address>/<Disk or Service Name> -U Mohsen

-------------------------------------------------------------

To copy the file from the local file system to the SMB server:
smb: \> put local_file remote_file

To copy the file from the SMB server to the local file system:
smb: \> get remote_file local_file

-------------------------------------------------------------

+aria2c (April 26, 2018, 10:55 a.m.)

aria2c -d ~/Downloads/ -i ~/Downloads/dl.txt --summary-interval=20 --check-certificate=false -c -x16 -s16 -j1

For limiting speed add:
--max-overall-download-limit=1400K

------------------------------------------------------------

Rename after download:

out=<name.extension>
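
In the input file passed with -i, out= goes on its own indented line directly below the URL it renames; a minimal sketch (the URL and name are made up):
https://example.com/song.mp3
  out=renamed-song.mp3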

------------------------------------------------------------

+Download dependencies and packages to directory (June 24, 2019, 1:38 p.m.)

1- On the server with no Internet, generate the list of package URLs:
apt-get --print-uris --yes install <my_package_name> | grep ^\' | cut -d\' -f2 > downloads.list

2- Download the links from another server with Internet connection:
wget --input-file downloads.list

3- Copy the files to the location /var/cache/apt/archives on the destination server.

4- Install the package using apt install.

+Change/Rename username/group (June 16, 2019, 5:13 p.m.)

usermod -l new-name old-name

groupmod -n new-group old-group
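
To also move the home directory so it matches the new name (standard usermod flags; -m moves the old content):
usermod -d /home/new-name -m new-name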

-------------------------------------------------------------------

If the following error occurs:
usermod: user tom is currently used by process 123

Kill that process (or all of the user's processes) first:
kill 123
pkill -9 -u old-name

-------------------------------------------------------------------

+rsync (May 5, 2018, 11:26 a.m.)

--delete : delete files that don't exist on sender (system)
-v : Verbose (try -vv for more detailed information)
-e "ssh options" : specify the ssh as remote shell
-a : archive mode
-r : recurse into directories
-z : compress file data

---------------------------------------------------------------------------

rsync -civarzhne 'ssh -p 22' --no-g --no-p --delete --force --exclude-from 'fair/rsync' fair root@fair.mohsenhassani.ir:/srv/

---------------------------------------------------------------------------

rsync -arvb --exclude-from 'my_project/rsync-exclude-list.txt' --delete --backup-dir='my_project/my_project/rsync-deletions' -e ssh my_project mohsen@mohsenhassani.com:/srv/

---------------------------------------------------------------------------

rsync -varPe 'ssh' --ignore-existing mohsenhasani.com:~/temp/music/* /home/mohsen/Audio/Music/Unsorted/music/

---------------------------------------------------------------------------

Exclude files and folders:

Files:
--exclude 'sources.txt'
--exclude '*.pyc'

Directories:
--exclude '/static'
--exclude 'abc*'

Together:
--exclude 'sources.txt' --exclude 'abc*'

---------------------------------------------------------------------------

-a = recursive (recurse into directories), links (copy symlinks as symlinks), perms (preserve permissions), times (preserve modification times), group (preserve group), owner (preserve owner), preserve device files, and preserve special files.

-v = verbose. The reason I think verbose is important is so you can see exactly what rsync is backing up. Think about this: What if your hard drive is going bad, and starts deleting files without your knowledge, then you run your rsync script and it pushes those changes to your backups, thereby deleting all instances of a file that you did not want to get rid of?

--delete = This tells rsync to delete any files that are in Directory2 that aren’t in Directory1. If you choose to use this option, I recommend also using the verbose options, for reasons mentioned above.

-l = preserves any links you may have created.

--progress = shows the progress of each file transfer. Can be useful to know if you have large files being backed up.

--stats = Adds a little more output regarding the file transfer status.

-I, --ignore-times
Normally rsync will skip any files that are already the same size and have the same modification timestamp. This option turns off this "quick check" behavior, causing all files to be updated.

-b, --backup
With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options. Note that if you don’t specify --backup-dir, (1) the --omit-dir-times option will be implied, and (2) if --delete is also in effect (without --delete-excluded), rsync will add a "protect" filter-rule for the backup suffix to the end of all your existing excludes (e.g. -f "P *~"). This will prevent previously backed-up files from being deleted. Note that if you are supplying your own filter rules, you may need to manually insert your own exclude/protect rule somewhere higher up in the list so that it has a high enough priority to be effective (e.g., if your rules specify a trailing inclusion/exclusion of ’*’, the auto-added rule would never be reached).

--backup-dir=DIR
In combination with the --backup option, this tells rsync to store all backups in the specified directory on the receiving side. This can be used for incremental backups. You can additionally specify a backup suffix using the --suffix option (otherwise the files backed up in the specified directory will keep their original filenames). Note that if you specify a relative path, the backup directory will be relative to the destination directory, so you probably want to specify either an absolute path or a path that starts
with "../". If an rsync daemon is the receiver, the backup dir cannot go outside the module’s path hierarchy, so take extra care not to delete it or copy into it.

--suffix=SUFFIX
This option allows you to override the default backup suffix used with the --backup (-b) option. The default suffix is a ~ if no --backup-dir was specified, otherwise it is an empty string.

-u, --update
This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file. (If an existing destination file has a modification time equal to the source file’s, it will be updated if the sizes are different.) Note that this does not affect the copying of symlinks or other special files. Also, a difference of file format between the sender and receiver is always considered to be important enough for an update, no matter what date is on the objects. In other words, if the source has a directory where the destination has a file, the transfer would occur regardless of the timestamps. This option is a transfer rule, not an exclude, so it doesn’t affect the data that goes into the file-lists, and thus it doesn’t affect deletions. It just limits the files that the receiver requests to be transferred.

---------------------------------------------------------------------------

+Shadowsocks - Proxy tool (May 13, 2018, 9:25 p.m.)

Server Installation:

(Use python 2.7)
1- pip install shadowsocks
(You can create a virtualenv if you want.)


2- Create a file /etc/shadowsocks.json:
{
"server": "[server ip address]",
"port_password": {
"8381": "Mohsen123",
"8382": "Mohsen321",
"8383": "MoMo"
},
"local_port": 1080,
"timeout": 600,
"method": "aes-256-cfb"
}


3- ssserver --manager-address /var/run/shadowsocks-manager.sock -c /etc/shadowsocks.json start
(If you installed shadowsocks in a virtualenv, you need to "activate" it to see the command "ssserver")

If you got error like this:
AttributeError: /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1: undefined symbol: EVP_CIPHER_CTX_cleanup
Refer to the bottom of this note for solution!

If you got these errors:
[Errno 98] Address already in use
can not bind to manager address
Delete the file in:
rm /var/run/shadowsocks-manager.sock


4- Open Firewall Port to Shadowsocks Client for each ports defined at the above json file:
ufw allow proto tcp to 0.0.0.0/0 port 8381 comment "Shadowsocks server listen port"
Do the same for other ports too, 8382, 8383, etc

5- Automatically Start Shadowsocks Service:
Put the whole command from step 3 in the file /etc/rc.local

---------------------------------------------------------------------

Client Installation: (Linux)

1- pip install shadowsocks
(You can create a virtualenv if you want.)


2- Create a file /etc/shadowsocks.json with the exact content from step 2 of "Server Installation".

3- sslocal -c /etc/shadowsocks.json
(If you installed shadowsocks in a virtualenv, you need to "activate" it to see the command "sslocal")
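
sslocal exposes a SOCKS5 proxy on 127.0.0.1:1080 (the "local_port" from the json above). A quick way to test it, assuming curl is installed:
curl --socks5 127.0.0.1:1080 https://www.google.com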

---------------------------------------------------------------------

Client Installation: (Android)

Install the Shadowsocks app from the link below:
https://play.google.com/store/apps/details?id=com.github.shadowsocks

---------------------------------------------------------------------

If you got error like this:
AttributeError: /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1: undefined symbol: EVP_CIPHER_CTX_cleanup

Open the file:
vim /usr/local/lib/python2.7/dist-packages/shadowsocks/crypto/openssl.py

Replace "cleanup" with "reset" in line 52:
libcrypto.EVP_CIPHER_CTX_cleanup.argtypes = (c_void_p,)
libcrypto.EVP_CIPHER_CTX_reset.argtypes = (c_void_p,)

And also replace "cleanup" with "reset" in line 111:
libcrypto.EVP_CIPHER_CTX_cleanup
with:
libcrypto.EVP_CIPHER_CTX_reset

---------------------------------------------------------------------

+Check if a disk is an SSD or an HDD (Dec. 18, 2018, 9:21 a.m.)

cat /sys/block/sda/queue/rotational

You should get the value 0 for an SSD

--------------------------------------------------------------

lsblk -d -o name,rota

This will return either 0 (for rotational speed false, meaning SSD) or 1 (for rotating drives, meaning non-SSD)

--------------------------------------------------------------

Verify VPS provided is on SSD drive:

dd if=/dev/zero of=/tmp/basezap.img bs=512 count=1000 oflag=dsync

This command should take only a few seconds if it is an SSD. If it took longer, it is a normal hard disk.

--------------------------------------------------------------

time for i in `seq 1 1000`; do
dd bs=4k if=/dev/sda count=1 skip=$(( $RANDOM * 128 )) >/dev/null 2>&1;
done

--------------------------------------------------------------

+ffmpeg (May 10, 2019, 4:30 p.m.)

Cut Movies (-ss is the start time, -t is the duration, not the end time):
ffmpeg -i 4.VOB -ss 00:14 -t 02:11 -c copy cut2.mp4

---------------------------------------------------------

Resize resolution (the output must be a different file from the input):
ffmpeg -i input.mp4 -s 640x480 -b:v 1024k -vcodec mpeg4 -acodec copy output.mp4


List of all formats & codes supported by ffmpeg:
ffmpeg -formats

ffmpeg -codecs

---------------------------------------------------------

Converting mp4 to mp3:

ffmpeg -i video.mp4 -vn -acodec libmp3lame -ac 2 -qscale:a 4 -ar 48000 audio.mp3

---------------------------------------------------------

Merge audio & video:

ffmpeg -i video.mp4 -i audio.mp3 -c:v copy -c:a mp3 -strict experimental output.mp4

---------------------------------------------------------

m2t to mp3:
ffmpeg -i mohsen.m2t -f mp3 -acodec mp3 -ab 320k -ar 44100 -vn mohsen.mp3

---------------------------------------------------------

+OpenVPN (Nov. 18, 2018, 9:52 a.m.)

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-16-04

=================== Server Configuration ===================

1- apt install openvpn easy-rsa

2- make-cadir /var/openvpn-ca

3- Build the Certificate Authority:
cd /var/openvpn-ca
mv openssl-1.0.0.cnf openssl.cnf
source vars
./clean-all
./build-ca


4- Create the Server Certificate, Key, and Encryption Files:
./build-key-server server
When asked for "Sign the certificate" reply "y"
./build-dh


5- Generate an HMAC signature to strengthen the server's TLS integrity verification capabilities:
openvpn --genkey --secret keys/ta.key


6- Generate a Client Certificate and Key Pair:
./build-key user1


7- Copy the Files to the OpenVPN Directory:
cd keys
cp ca.crt server.crt server.key ta.key dh2048.pem /etc/openvpn
If the file "dh2048.pem" was not available, you can copy it from:
cp /usr/share/doc/openvpn/examples/sample-keys/dh2048.pem /etc/openvpn
or you might need to locate it.


8- Copy and unzip a sample OpenVPN configuration file into configuration directory:
gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz | tee /etc/openvpn/server.conf


9- Adjust the OpenVPN Configuration:
vim /etc/openvpn/server.conf

* Find the directive "tls-auth ta.key 0", uncomment it (if it's commented) and add "key-direction 0" below it.

* Find "cipher AES-256-CBC", uncomment it and add "auth SHA256" below it.

* Find and uncomment:
user nobody
group nogroup
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"


10- Allow IP Forwarding:
Uncomment the line "net.ipv4.ip_forward" in the file /etc/sysctl.conf.
To read the file and adjust the values for the current session, type:
sysctl -p


11- Adjust the UFW Rules to Masquerade Client Connections:
Find the public network interface using:
ip route | grep default
The part after "dev" is the public network interface. We need it for next step.


12- Add the following lines to the bottom of the file "/etc/ufw/before.rules":
There is a "COMMIT" at the end of the file. Do not delete or comment that "COMMIT".
Just add this block at the end of the file. Each "COMMIT" applies the rules of its own block.

# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to server public network interface
-A POSTROUTING -s 10.8.0.0/8 -o <your_public_network_interface> -j MASQUERADE
COMMIT
# END OPENVPN RULES


13- Open the file "/etc/default/ufw":
Find "DEFAULT_FORWARD_POLICY="DROP"" and change "DROP" to "ACCEPT".


14- Open the OpenVPN Port and Enable the Changes:
ufw allow 1194/udp
ufw allow OpenSSH
ufw disable
ufw enable


15- Start and Enable the OpenVPN Service:
systemctl start openvpn@server
systemctl status openvpn@server

Also check that the OpenVPN tun0 interface is available:
ip addr show tun0


16- Enable the service so that it starts automatically at boot:
systemctl enable openvpn@server


17- Create the Client Config Directory Structure:
mkdir -p /var/client-configs/files
chmod 700 /var/client-configs/files


18- Copy an example client configuration:
cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /var/client-configs/base.conf


19- Open the "/var/client-configs/base.conf" file and enter your server IP to the directive:
remote <your_server_ip> 1194

Uncomment:
user nobody
group nogroup

Comment:
# ca ca.crt
# cert client.crt
# key client.key

Add "auth SHA256" after the line "cipher AES-256-CBC"

Add "key-direction 1" somewhere in the file.

Add a few commented out lines:
# script-security 2
# up /etc/openvpn/update-resolv-conf
# down /etc/openvpn/update-resolv-conf
If your client is running Linux and has an /etc/openvpn/update-resolv-conf file, you should uncomment these lines from the generated OpenVPN client configuration file.


20- Creating a Configuration Generation Script:
vim /var/client-configs/make_config.sh

Paste the following script:
#!/bin/bash

# First argument: Client identifier

KEY_DIR=/var/openvpn-ca/keys
OUTPUT_DIR=/var/client-configs/files
BASE_CONFIG=/var/client-configs/base.conf

cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn


21- Mark the file as executable:
chmod 700 /var/client-configs/make_config.sh


22- Generate Client Configurations:
cd /var/client-configs/
./make_config.sh user1

If everything went well, we should have a "user1.ovpn" file in our "/var/client-configs/files" directory.


23- Transferring Configuration to Client Devices:
Use scp or any other method to download a copy of the created "user1.ovpn" file to your client.


=================== Client Configuration ===================

24- Install the Client Configuration:
apt install openvpn


25- Check to see if your distribution includes a "/etc/openvpn/update-resolv-conf" script:
ls /etc/openvpn
If you see a file "update-resolv-conf":
Edit the OpenVPN client configuration file you transferred and uncomment the three lines we placed in to adjust the DNS settings.


26- If you are using CentOS, change the group from nogroup to nobody to match the distribution's available groups.


27- Now, you can connect to the VPN by just pointing the openvpn command to the client configuration file:
sudo openvpn --config user1.ovpn

+DVB - TV Card Driver (April 17, 2015, 7:49 p.m.)

This will install the driver automatically:

1- mkdir it9135 && cd it9135

2- wget http://www.ite.com.tw/uploads/firmware/v3.6.0.0/dvb-usb-it9135.zip

3- unzip dvb-usb-it9135.zip

4- dd if=dvb-usb-it9135.fw ibs=1 skip=64 count=8128 of=dvb-usb-it9135-01.fw

5- dd if=dvb-usb-it9135.fw ibs=1 skip=12866 count=5817 of=dvb-usb-it9135-02.fw

6- rm dvb-usb-it9135.fw

7- sudo install -D *.fw /lib/firmware

8- sudo chmod 644 /lib/firmware/dvb-usb-it9135* && cd .. && rm -rf it9135

9- sudo apt install kaffeine

After the above solution, you should be able to watch channels via Kaffeine (or any other DVB player). Just grab Kaffeine, scan the frequencies and you should be fine!

-------------------------------------------------------------

If you had problems with the above solution, check the older method below:

http://nucblog.net/2014/11/installing-media-build-drivers-for-additional-tv-tuner-support-in-linux/

1- sudo apt-get install libproc-processtable-perl git libc6-dev

2- git clone git://linuxtv.org/media_build.git

3- cd media_build

4- ./build

5- sudo make install

6- apt-get install me-tv kaffeine

7- Reboot to load the driver (I don't know the module name for modprobe yet).

-----------------------------------------------------

Scan channels using Kaffeine:

1- Open Kaffeine

2- From `Television` menu, choose `Configure Television`.

3- From `Device 1` tab, from `Source` option, choose `Autoscan`

4- From `Television` menu choose `Channels`

5- Click on `Start Scan` and after the scan procedure is done, select all channels from the side panel and click on `Add Selected` to add them to your channels.

-------------------------------------------------------------

Scan channels using Me-TV

1- Open Me-TV

2- When the scan dialog opens, choose `Czech Republic` from `Auto Scan`.

-------------------------------------------------------------

+sed - inline string replace (April 7, 2018, 6:29 p.m.)

echo "the old string . . . " | sed -e "s/old/new/g/"

+Install GRUB manually (March 9, 2018, 12:05 p.m.)

sudo mount /dev/sdax /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /dev/pts /mnt/dev/pts
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt

update-initramfs -u
update-grub2
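
If the bootloader itself is missing (not just its configuration), also run grub-install from inside the chroot; sdX is the whole disk, not a partition:
grub-install /dev/sdX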

+Forwarding X (March 6, 2018, 7:55 p.m.)

1- Edit the file sshd_config:
vim /etc/ssh/sshd_config

X11Forwarding yes
X11UseLocalhost no

2- Restart ssh server:
/etc/init.d/ssh reload

3- Install xauth:
apt install xauth

4- SSH to the server:
ssh -X root@mohsenhassani.com

+Partitioning Error - Partition table entries are not in disk order (Feb. 13, 2018, 5:37 p.m.)

sudo gdisk /dev/sda
p (the p-command prints the recent partition-table on-screen)
s (the s-command sorts the partition-table entries)
p (use the p-command again to see the result on your screen)
w (write the changed partition-table to the disk)
q (quit gdisk)

+tcpdump (Jan. 13, 2018, 11:29 a.m.)

apt install tcpdump

sudo tcpdump -i any -n host 5.219.145.86
sudo tcpdump -nti any port 80

+Use cURL on specific interface (Jan. 9, 2018, 1:09 p.m.)

curl -o rootLast.tbz2 http://ftp.mohsenhassani.com/rootLast.tbz2 --interface eno2

+PDF Conversions (Nov. 6, 2017, 3:21 p.m.)

Installation:
apt install graphicsmagick-imagemagick-compat

-------------------------------------------------------------

Convert multiple images to a PDF file:
convert *.jpg aa.pdf

-------------------------------------------------------------

Convert a PDF file to images:

convert 1.pdf 1.jpg

For a single page:
convert 1.pdf[4] 1.jpg

-------------------------------------------------------------

If the following error occurred:
convert: not authorized `1.pdf' @ error/constitute.c/ReadImage/412.
convert: no images defined `1.jpg' @ error/convert.c/ConvertImageCommand/3210.

Solution:
This problems comes from a security update.
Edit the file: /etc/ImageMagick-6/policy.xml
Change "none" to "read|write" in the line:
<policy domain="coder" rights="read|write" pattern="PDF" />

-------------------------------------------------------------

+Add a New Disk to an Existing Linux Server (Oct. 25, 2017, 3:44 p.m.)

1- Check if the added disk is shown:
fdisk -l

2- For partitioning:
fdisk /dev/vdb
n
p
1
2048
+49G (For a 50G disk)
w
------------------------------
Now format the disk with mkfs command.
mkfs.ext4 /dev/vdb1

Make an entry in /etc/fstab file for permanent mount at boot time:
/dev/vdb1 /mnt/ftp ext4 defaults 0 0
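
To mount it right away without rebooting (assuming the mount point from the fstab line above):
mkdir -p /mnt/ftp
mount -a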

+Clear Terminal Completely (Sept. 18, 2017, 6:13 p.m.)

clear && printf '\e[3J'

OR

printf '\33c\e[3J'

+Add SSH Private Key (Sept. 18, 2017, 5:01 p.m.)

ssh-add .ssh/id_rsa

If you got an error:
Could not open a connection to your authentication agent.

For fixing it run:
eval `ssh-agent -s`
OR
eval $(ssh-agent)

And then repeat the earlier command (ssh-add ....)

------------------------------------------------------------

Add SSH private key permanently:

Create a file ~/.ssh/config with the content:
IdentityFile ~/.ssh/id_mohsen
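
To limit the key to one host, a per-host block can be used instead (the hostname is just an example):
Host mohsenhassani.com
    IdentityFile ~/.ssh/id_mohsen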

------------------------------------------------------------

+Commands - IP (Sept. 16, 2017, 5:29 p.m.)

Assign an IP Address to Specific Interface:
ip addr add 192.168.50.5 dev eth1

---------------------------------------------------------------------

Check an IP Address
ip addr show

---------------------------------------------------------------------

Remove an IP Address
ip addr del 192.168.50.5/24 dev eth1

---------------------------------------------------------------------

Enable Network Interface
ip link set eth1 up

---------------------------------------------------------------------

Disable Network Interface
ip link set eth1 down

---------------------------------------------------------------------

Check Route Table
ip route show

---------------------------------------------------------------------

Add Static Route
ip route add 10.10.20.0/24 via 192.168.50.100 dev eth0

---------------------------------------------------------------------

Remove Static Route
ip route del 10.10.20.0/24

---------------------------------------------------------------------

Add Default Gateway
ip route add default via 192.168.50.1

---------------------------------------------------------------------

+Commands - Find (Sept. 12, 2017, 11:08 a.m.)

Find Files Using Name in Current Directory
find . -name mohsen.txt

----------------------------------------------------------

Find Files Under Home Directory
find /home -name mohsen.txt

----------------------------------------------------------

Find Files Using Name and Ignoring Case
find /home -iname mohsen.txt

----------------------------------------------------------

Find Directories Using Name
find / -type d -name Mohsen

----------------------------------------------------------

Find PHP Files Using Name
find . -type f -name mohsen.php

----------------------------------------------------------

Find all PHP Files in Directory
find . -type f -name "*.php"

----------------------------------------------------------

Find Files With 777 Permissions
find . -type f -perm 0777 -print

----------------------------------------------------------

Find Files Without 777 Permissions
find / -type f ! -perm 777

----------------------------------------------------------

Find SGID Files with 644 Permissions
find / -perm 2644

----------------------------------------------------------

Find Sticky Bit Files with 551 Permissions
find / -perm 1551

----------------------------------------------------------

Find SUID Files
find / -perm /u=s

----------------------------------------------------------

Find SGID Files
find / -perm /g=s

----------------------------------------------------------

Find Read Only Files
find / -perm /u=r

----------------------------------------------------------

Find Executable Files
find / -perm /a=x

----------------------------------------------------------

Find Files with 777 Permissions and Chmod to 644
find / -type f -perm 0777 -print -exec chmod 644 {} \;

----------------------------------------------------------

Find Directories with 777 Permissions and Chmod to 755
find / -type d -perm 777 -print -exec chmod 755 {} \;

----------------------------------------------------------

Find and remove single File
find . -type f -name "tecmint.txt" -exec rm -f {} \;

----------------------------------------------------------

Find and remove Multiple File
find . -type f -name "*.txt" -exec rm -f {} \;
OR
# find . -type f -name "*.mp3" -exec rm -f {} \;

----------------------------------------------------------

Find all Empty Files
find /tmp -type f -empty

----------------------------------------------------------

Find all Empty Directories
find /tmp -type d -empty

----------------------------------------------------------

Find all Hidden Files
find /tmp -type f -name ".*"

----------------------------------------------------------

Find Single File Based on User
find / -user root -name mohsen.txt

----------------------------------------------------------

Find all Files Based on User
find /home -user mohsen

----------------------------------------------------------

Find all Files Based on Group
find /home -group developer

----------------------------------------------------------

Find Particular Files of User
find /home -user mohsen -iname "*.txt"

----------------------------------------------------------

Find Last 50 Days Modified Files
find / -mtime 50

----------------------------------------------------------

Find Last 50 Days Accessed Files
find / -atime 50

----------------------------------------------------------

Find Last 50-100 Days Modified Files
find / -mtime +50 -mtime -100

----------------------------------------------------------

Find Changed Files in Last 1 Hour
find / -cmin -60

----------------------------------------------------------

Find Modified Files in Last 1 Hour
find / -mmin -60

----------------------------------------------------------

Find Accessed Files in Last 1 Hour
find / -amin -60

----------------------------------------------------------

Find 50MB Files
find / -size 50M

----------------------------------------------------------

Find Size between 50MB – 100MB
find / -size +50M -size -100M

----------------------------------------------------------

Find and Delete 100MB Files
find / -size +100M -exec rm -rf {} \;

----------------------------------------------------------

Find Specific Files and Delete
find / -type f -name "*.mp3" -size +10M -exec rm {} \;

----------------------------------------------------------

Find + grep

find . -type f -iname "*.py" -exec grep --exclude=./PC-Projects/* -Riwl 'sqlalchemy' {} \;

----------------------------------------------------------

find /var/mohsen_backups -name "*`date --date='-20 days' +%Y-%m-%d`.tar.gz" -exec rm {} +

----------------------------------------------------------

Files created/modified before the date "2019-05-07":
find . ! -newermt "2019-05-07"

After the date:
find . -newermt "2019-05-07"

Using datetime:
find . ! -newermt "2019-05-07 12:23:17"

Also:
find . -newermt "june 01, 2019"
find . -not -newermt "june 01, 2019"

find . -type f ! -newermt "June 01, 2019" -exec rm {} +

----------------------------------------------------------

find . -name "*.mp4" -exec mv {} videos/ \;

----------------------------------------------------------

+Commands - Netstat (Sept. 12, 2017, 11 a.m.)

netstat (network statistics)
---------------------------------------------------------
Listing all the LISTENING Ports of TCP and UDP connections
netstat -a
---------------------------------------------------------
Listing TCP Ports connections
netstat -at
---------------------------------------------------------
Listing UDP Ports connections
netstat -au
---------------------------------------------------------
Listing all LISTENING Connections
netstat -l
---------------------------------------------------------
Listing all TCP Listening Ports
netstat -lt
---------------------------------------------------------
Listing all UDP Listening Ports
netstat -lu
---------------------------------------------------------
Listing all UNIX Listening Ports
netstat -lx
---------------------------------------------------------
Showing Statistics by Protocol
netstat -s
---------------------------------------------------------
Showing Statistics by TCP Protocol
netstat -st
---------------------------------------------------------
Showing Statistics by UDP Protocol
netstat -su
---------------------------------------------------------
Displaying Service name with PID
netstat -tp
---------------------------------------------------------
Displaying Promiscuous Mode
netstat -ac 5 | grep tcp
---------------------------------------------------------
Displaying Kernel IP routing
netstat -r
---------------------------------------------------------
Showing Network Interface Transactions
netstat -i
---------------------------------------------------------
Showing Kernel Interface Table
netstat -ie
---------------------------------------------------------
Displaying IPv4 and IPv6 Information
netstat -g
---------------------------------------------------------
Print Netstat Information Continuously
netstat -c
---------------------------------------------------------
Finding unsupported address families
netstat --verbose
---------------------------------------------------------
Finding Listening Programs
netstat -ap | grep http
---------------------------------------------------------
Displaying RAW Network Statistics
netstat --statistics --raw
---------------------------------------------------------

+Reverse SSH Tunneling (Sept. 10, 2017, 3:08 p.m.)

1- SSH from the destination to the source (with public IP) using the command below:
ssh -R 19999:localhost:22 sourceuser@138.47.99.99
* port 19999 can be any unused port.

2- Now you can SSH from source to destination through SSH tunneling:
ssh localhost -p 19999

3- 3rd party servers can also access 192.168.20.55 through the Source (138.47.99.99).
Destination (192.168.20.55) <- |NAT| <- Source (138.47.99.99) <- Bob's server

3.1 From Bob's server:
ssh sourceuser@138.47.99.99

3.2 After the successful login to Source:
ssh localhost -p 19999

The connection between destination and source must be alive at all times.
Tip: you may run a command (e.g. watch, top) on Destination to keep the connection active.
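
If autossh is installed, it can re-establish the tunnel automatically instead (a sketch; the keepalive options are assumptions you may tune):
autossh -M 0 -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -R 19999:localhost:22 sourceuser@138.47.99.99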

+Auto Mount Hard Disk using /etc/fstab (Sept. 8, 2017, 8:11 a.m.)

UUID=e6a27fec-b822-4cc1-9f41-ca14655f938c /media/mohsen/4TB-Internal ext4 rw,user,exec 0 0

--------------------------------------------------------------------------------

To skip errors add "nobootwait":

/dev/sdb1 /mnt/ ext4 defaults,nobootwait 0 2
--------------------------------------------------------------------------------

File System Types:

auto

vfat - used for FAT partitions

ntfs, ntfs-3g - used for ntfs partitions

ext4, ext3, ext2, jfs, reiserfs, etc

udf,iso9660 - for CD/DVD

--------------------------------------------------------------------------------

Common options :

sync/async - All I/O to the file system should be done (a)synchronously.

auto - The filesystem can be mounted automatically (at boot-up, or when mount is passed the -a option). This is really unnecessary as this is the default action of mount -a anyway.

noauto - The filesystem will NOT be automatically mounted at startup, or when mount passed -a. You must explicitly mount the filesystem.

dev/nodev - Interpret/Do not interpret character or block special devices on the file system.

exec / noexec - Permit/Prevent the execution of binaries from the filesystem.

suid/nosuid - Permit/Block the operation of suid, and sgid bits.

ro - Mount read-only.

rw - Mount read-write.

user - Permit any user to mount the filesystem. This automatically implies noexec, nosuid, nodev unless overridden.

nouser - Only permit root to mount the filesystem. This is also a default setting.

defaults - Use default settings. Equivalent to rw, suid, dev, exec, auto, nouser, async.

_netdev - this is a network device, mount it after bringing up the network. Only valid with fstype nfs.

--------------------------------------------------------------------------------

+Crontab (July 11, 2017, 12:55 a.m.)

The crontab (cron derives from chronos, Greek for time; tab stands for table).

----------------------------------------------

To see what crontabs are currently running on your system:

sudo crontab -l
crontab -u username -l

----------------------------------------------

To edit the list of cronjobs:
sudo crontab -e

----------------------------------------------

To remove or erase all crontab jobs:
crontab -r

----------------------------------------------

Running GUI Applications:
0 1 * * * env DISPLAY=:0.0 transmission-gtk

Replace :0.0 with your actual DISPLAY.
Use "echo $DISPLAY" to find the display.

----------------------------------------------

Cronjobs are written in the following format:

* * * * * /bin/execute/this/script.sh

As you can see there are 5 stars. The stars represent different date parts in the following order:

minute (from 0 to 59)
hour (from 0 to 23)
day of month (from 1 to 31)
month (from 1 to 12)
day of week (from 0 to 6) (0=Sunday)

----------------------------------------------

Execute every minute:

* * * * * /bin/execute/this/script.sh

This means execute /bin/execute/this/script.sh:

every minute
of every hour
of every day of the month
of every month
and every day in the week.

----------------------------------------------

Execute every Friday 1 AM

0 1 * * 5 /bin/execute/this/script.sh

----------------------------------------------

Execute on workdays 1AM

0 1 * * 1-5 /bin/execute/this/script.sh

----------------------------------------------

Execute 10 minutes past every hour on the 1st of every month

10 * 1 * * /bin/execute/this/script.sh

----------------------------------------------

Run every 10 minutes:

0,10,20,30,40,50 * * * * /bin/execute/this/script.sh

But crontab allows you to do this as well:

*/10 * * * * /bin/execute/this/script.sh

----------------------------------------------

Special words:

For the first (minute) field, you can also put in a keyword instead of a number:

@reboot Run once, at startup
@yearly Run once a year "0 0 1 1 *"
@annually (same as @yearly)
@monthly Run once a month "0 0 1 * *"
@weekly Run once a week "0 0 * * 0"
@daily Run once a day "0 0 * * *"
@midnight (same as @daily)
@hourly Run once an hour "0 * * * *"

Leaving the rest of the fields empty, this would be valid:

@daily /bin/execute/this/script.sh

----------------------------------------------

List of the English abbreviated day of the week, which can be used in place of numbers:

0 -> Sun

1 -> Mon
2 -> Tue
3 -> Wed
4 -> Thu
5 -> Fri
6 -> Sat

7 -> Sun

Having two numbers for Sunday (0 and 7) can be useful for writing weekday ranges starting with 0 or ending with 7.

Examples of Number or Abbreviation Use

The next four examples will do all the same and execute a command every Friday, Saturday, and Sunday at 9.15 o'clock:

15 09 * * 5,6,0 command
15 09 * * 5,6,7 command
15 09 * * 5-7 command
15 09 * * Fri,Sat,Sun command

----------------------------------------------

Getting output from a cron job on the terminal:
You can redirect the output of your program to the pts file of an already existing terminal!
To know the pts file just type tty command
tty
And then add it to the end of your cron task:
38 23 * * * /home/mohsen/Programs/downloader.sh >> /dev/pts/4

----------------------------------------------

Cron jobs get logged to:
/var/log/syslog

You can see just cron jobs in that logfile by running:
grep CRON /var/log/syslog

OR

tail -f /var/log/syslog | grep CRON

----------------------------------------------

Mailing the crontab output

By default, cron saves the output in the user's mailbox (root in this case) on the local system. But you can also configure crontab to forward all output to a real email address by starting your crontab with the following line:

MAILTO="yourname@yourdomain.com"

Mailing the crontab output of just one cronjob.
If you'd rather receive only one cronjob's output in your mail, make sure this package is installed:

$ aptitude install mailx

And change the cronjob like this:

*/10 * * * * /bin/execute/this/script.sh 2>&1 | mail -s "Cronjob output" yourname@yourdomain.com

----------------------------------------------

Trashing the crontab output

Now that's easy:

*/10 * * * * /bin/execute/this/script.sh > /dev/null 2>&1

Just pipe all the output to the null device, also known as the black hole. On Unix-like operating systems, /dev/null is a special file that discards all data written to it.

----------------------------------------------

Many scripts are tested in a Bash environment with the PATH variable set. This way it's possible your scripts work in your shell, but when running from cron (where the PATH variable is different), the script cannot find referenced executables and fails.

It's not the job of the script to set PATH, it's the responsibility of the caller, so it can help to echo $PATH, and put PATH=<the result> at the top of your cron files (right below MAILTO).
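
For example, at the top of the crontab (the PATH value is just a sample; use your own echo $PATH output):

MAILTO="yourname@yourdomain.com"
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin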

----------------------------------------------

Applicable Examples:

0 * * * * DISPLAY=:0 /home/mohsen/Programs/transmission-startup.sh
0 11 * * * /home/mohsen/Programs/transmission-shutdown.sh

Do not forget to chmod +x both of the following files.

-----------

transmission-startup.sh:
#! /bin/bash

/usr/bin/transmission-gtk > /dev/null &
echo $! > /tmp/transmission.pid
exit

-----------

transmission-shutdown.sh:
#! /bin/bash

if [ -f /tmp/transmission.pid ]
then
/bin/kill $(cat /tmp/transmission.pid)
fi

----------------------------------------------

How do I use operators?

An operator allows you to specify multiple values in a field. There are three operators:

The asterisk (*): This operator specifies all possible values for a field. For example, an asterisk in the hour time field would be equivalent to every hour or an asterisk in the month field would be equivalent to every month.

The comma (,) : This operator specifies a list of values, for example: “1,5,10,15,20, 25”.

The dash (-): This operator specifies a range of values, for example, “5-15” days, which is equivalent to typing “5,6,7,8,9,….,13,14,15” using the comma operator.

The separator (/): This operator specifies a step value, for example: “0-23/2” can be used in the hours field to specify command execution every other hour. Steps are also permitted after an asterisk, so if you want to say every two hours, just use */2.

----------------------------------------------

+fdisk (July 8, 2017, 5:03 p.m.)

Merge Partitions:

1- fdisk /dev/sda


2- p
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 6293503 6291456 3G 83 Linux
/dev/sda2 6295550 10483711 4188162 2G 5 Extended


3- Delete both partitions you are going to merge:
d
Partition number (1,2, default 2): 2
Partition 2 has been deleted.

Command (m for help): d
Partition number (1-4): 1


4- n
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 2): 1
First sector (63-1953520064, default: 63): (Choose the default value)
Last sector, +sectors... (Choose the default value)


5- t
Partition number (1-4): 1
Hex code (type L to list codes): 83


6- Make sure you've got what you're expecting:
Command (m for help): p


7- Finally, save it:
Command (m for help): w


8- resize2fs /dev/sda1
Reboot the system, then check if the partitions have been merged by:
fdisk -l

+Removing Swap Space (July 8, 2017, 2:52 p.m.)

1- swapoff /dev/sda5

2- Remove its entry from /etc/fstab

3- Remove the partition using parted:
apt-get install parted
parted /dev/sda
Type "print" to view the existing partitions and determine the minor number of the swap partition you wish to delete.
rm 5 (5 is the NUMBER of the partition).
Type "quit" to exit parted.

Done!

Now you need to merge the unused partition space with another partition. You can do it using the "fdisk" note.

+NFS (July 1, 2017, 10:19 a.m.)

NFS (Network File System)

NFS is a network-based file system that allows computers to access files across a computer network.

------------------------------------------------------------------------

Server Setup:

1- Installation:
apt install nfs-kernel-server



2- Server Configuration:
In order to expose a directory over NFS, open the file /etc/exports and attach the following line at the bottom:
/home/mohsen/Audio 10.10.0.32(ro,async,no_subtree_check)

This IP is the client which is going to have access to the shared folder. You can also use the IP range.

service nfs-kernel-server restart
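
To re-export entries from /etc/exports without restarting the whole service:
exportfs -ra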

------------------------------------------------------------------------

Client Setup

1- Client Installation:
apt install nfs-common

2- Create a directory named "Audio" and:
mount 10.10.0.192:/home/mohsen/Audio /mnt/Audio/

By running df -h, you can ensure that your operation was successful.


3- Make it permanent:
vim /etc/fstab

10.10.0.192:/home/mohsen/Audio /mnt/Audio/ nfs defaults 0 0

------------------------------------------------------------------------

For MacOS use this command:
sudo mount -o resvport 10.10.0.192:/home/mohsen/Audio /mnt/Audio/

------------------------------------------------------------------------

+Trim & Merge MP3 files (June 25, 2017, 2:11 p.m.)

sudo apt-get install sox libsox-fmt-mp3

--------------------------------------------------------------------------------

Trim:

sox infile outfile trim 0 1:06
sox infile outfile trim 1:52 =2:40

--------------------------------------------------------------------------------

Merge:

sox first.mp3 second.mp3 third.mp3 result.mp3

--------------------------------------------------------------------------------

Merge two audio files with a pad:

sox short.ogg -p pad 6 0 | sox - -m long.ogg output.ogg

--------------------------------------------------------------------------------

+Fix Wireless Headphone Problem (June 10, 2017, 5:34 p.m.)

https://gist.github.com/pylover/d68be364adac5f946887b85e6ed6e7ae

+Convert deb to iso (May 14, 2017, 3:37 p.m.)

mkisofs firmware-bnx2_0.43_all.deb > iso

+Samba - Active Directory Infrastructure (May 7, 2017, 10:31 a.m.)

1- sudo apt-get install samba krb5-user krb5-config winbind libpam-winbind libnss-winbind


2- While the installation is running, a series of questions will be asked by the installer in order to configure the domain controller.
First, DESKBIT.LOCAL
Second, deskbit.local
Third, deskbit.local


3- Provision Samba AD DC for Your Domain:
systemctl stop samba-ad-dc.service smbd.service nmbd.service winbind.service
systemctl disable samba-ad-dc.service smbd.service nmbd.service winbind.service


4- Rename or remove Samba's original configuration. This step is absolutely required before provisioning Samba AD because at provision time Samba will create a new configuration file from scratch and will throw errors in case it finds an old smb.conf file.
sudo mv /etc/samba/smb.conf /etc/samba/smb.conf.initial


5- Start the domain provisioning interactively:
samba-tool domain provision --use-rfc2307 --interactive
(Leave everything as default and set a desired password.)
Here is the last result after the process gets finished:
Server Role: active directory domain controller
Hostname: samba
NetBIOS Domain: DESKBIT
DNS Domain: deskbit.local
DOMAIN SID: S-1-5-21-163349405-2119569559-686966403


6- Rename or remove Kerberos main configuration file from /etc directory and replace it using a symlink with Samba newly generated Kerberos file located in /var/lib/samba/private path:
mv /etc/krb5.conf /etc/krb5.conf.initial
ln -s /var/lib/samba/private/krb5.conf /etc/


7- Start and enable Samba Active Directory Domain Controller daemons:
systemctl start samba-ad-dc.service
systemctl status samba-ad-dc.service (You may get some error logs, like "Cannot contact any KDC for requested realm", which is okay.)
systemctl enable samba-ad-dc.service


8- Use netstat command in order to verify the list of all services required by an Active Directory to run properly.
netstat -tulpn | egrep 'smbd|samba'


9- At this moment Samba should be fully operational at your premises. The highest domain level Samba is emulating should be Windows AD DC 2008 R2.
It can be verified with the help of samba-tool utility.
samba-tool domain level show


10- In order for DNS resolution to work locally, you need to open and edit the network interface settings and point the DNS resolution to your Domain Controller by setting the dns-nameservers statement to its IP Address (use 127.0.0.1 for local DNS resolution) and the dns-search statement to point to your realm.
When finished, reboot your server and take a look at your resolver file to make sure it points back to the right DNS name servers.


11- Test the DNS resolver by issuing queries and pings against some AD DC crucial records, as in the below excerpt. Replace the domain name accordingly.
ping -c3 deskbit.local # Domain Name
ping -c3 samba.deskbit.local # FQDN
ping -c3 samba # Host

+Date and Time (May 3, 2017, 1:42 p.m.)

Display Current Date and Time:
$ date

----------------------------------------------------

Display The Hardware Clock (RTC):

# hwclock -r

OR show it in Coordinated Universal time (UTC):
# hwclock --show --utc

----------------------------------------------------

Set Date Command Example:
date -s "2 OCT 2006 18:00:00"

OR
date --set="2 OCT 2006 18:00:00"

----------------------------------------------------

Set Time Examples:

date +%T -s "10:13:13"

Use %p, the locale's equivalent of either AM or PM:
# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"

----------------------------------------------------

How do I set the Hardware Clock to the current System Time?

Use the following syntax:
# hwclock --systohc

OR
# hwclock -w

----------------------------------------------------

A note about systemd based Linux system

With a systemd based system you need to use the timedatectl command to set or view the current date and time. Most modern distros such as RHEL/CentOS 7.x+, Fedora Linux, Debian, Ubuntu, Arch Linux and other systemd based systems use the timedatectl utility. Please note that the above commands should work on modern systems too.

----------------------------------------------------

timedatectl: Display the current date and time:

$ timedatectl

----------------------------------------------------

Change the current date using the timedatectl command:
# timedatectl set-time YYYY-MM-DD

OR
$ sudo timedatectl set-time YYYY-MM-DD

For example, set the current date to 2015-12-01 (Dec 1, 2015):
# timedatectl set-time '2015-12-01'
# timedatectl

----------------------------------------------------

To change both the date and time, use the following syntax:
# timedatectl set-time '2015-11-23 08:10:40'
# date

----------------------------------------------------

To set the current time only:

The syntax is:
# timedatectl set-time HH:MM:SS
# timedatectl set-time '10:42:43'
# date

----------------------------------------------------

Set the time zone using timedatectl command:

To see the list of all available time zones, enter:
$ timedatectl list-timezones
$ timedatectl list-timezones | more
$ timedatectl list-timezones | grep -i asia
$ timedatectl list-timezones | grep America/New

To set the time zone to ‘Asia/Kolkata’, enter:
# timedatectl set-timezone 'Asia/Kolkata'

Verify it:
# timedatectl

----------------------------------------------------

How to synchronize the system clock with a remote server using NTP:

# timedatectl set-ntp yes

Verify it:
$ timedatectl

----------------------------------------------------

For changing the timezone:
dpkg-reconfigure tzdata

----------------------------------------------------

+Extract ISO files (April 26, 2017, 12:28 p.m.)

sudo mount -o loop an_iso_file.iso /home/mohsen/Temp/foo/

+reprepro (March 4, 2017, 11:46 a.m.)

https://www.howtoforge.com/setting-up-an-apt-repository-with-reprepro-and-nginx-on-debian-wheezy
-------------------------------------------------------------------------
1-Install GnuPG and generate a GPG key for Signing Packages:
apt-get install gnupg dpkg-sig rng-tools
-------------------------------------------------------------------------
2-Open /etc/default/rng-tools:
vim /etc/default/rng-tools

and make sure you have the following line in it:
[...]
HRNGDEVICE=/dev/urandom
[...]

Then start rng-tools:
/etc/init.d/rng-tools start
-------------------------------------------------------------------------
3-Generate your key:
gpg --gen-key
-------------------------------------------------------------------------
4-Install and configure reprepro:
apt-get install reprepro

Let's use the directory /var/www/repo as the root directory for our repository. Create the directory /var/www/repo/conf:
mkdir -p /var/www/repo/conf
-------------------------------------------------------------------------
5-Let's find out about the key we have created in step 3:
gpg --list-keys

Our public key is D753ED90. We have to use this from now on.
-------------------------------------------------------------------------
6-Create the file /var/www/repo/conf/distributions as follows:
vim /var/www/repo/conf/distributions
-------------------------------------------------------------------------
7-The Origin and Label lines hold the address of our apt repository (here, reprepro.deskbit.local). In the SignWith line, we add our public key (D753ED90); drop the "2048R/" prefix:

Origin: reprepro.deskbit.local
Label: reprepro.deskbit.local
Codename: stable
Architectures: amd64
Components: main
Description: Deskbit Proprietary Softwares
SignWith: D753ED90
-------------------------------------------------------------------------
8-Create the (empty) file /var/www/repo/conf/override.stable:
touch /var/www/repo/conf/override.stable
-------------------------------------------------------------------------
9-Then create the file /var/www/repo/conf/options with this content:
verbose
ask-passphrase
basedir /var/www/repo
-------------------------------------------------------------------------
10-To sign our deb packages with our GPG key, we need the package dpkg-sig:
dpkg-sig -k D753ED90 --sign builder /usr/src/my-packages/*.deb
-------------------------------------------------------------------------
11-Now we import the deb packages into our apt repository:
cd /var/www/repo
reprepro includedeb stable /usr/src/my-packages/*.deb
-------------------------------------------------------------------------
12-Configuring nginx:
We need a webserver to serve our apt repository. In this example, I'm using an nginx webserver.

server {
listen 80;
server_name apt.example.com;

access_log /var/log/nginx/packages-access.log;
error_log /var/log/nginx/packages-error.log;

location / {
root /var/www/repo;
index index.html;
autoindex on;
}

location ~ /(.*)/conf {
deny all;
}

location ~ /(.*)/db {
deny all;
}
}
***************************************************************************
OR for Apache:

<VirtualHost *:80>
ServerName reprepro.deskbit.local
DocumentRoot /var/www/repo
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
-------------------------------------------------------------------------
13-Let's create a GPG key for the repository:
gpg --armor --output /var/www/repo/repo.deskbit.io.gpg.key --export D753ED90
-------------------------------------------------------------------------
14-To use the repository, place the following line in your /etc/apt/sources.list:
vim /etc/apt/sources.list

[...]
deb http://repo.deskbit.io/ stable main
[...]
-------------------------------------------------------------------------
15-If you want this repository to always have precedence over other repositories, you should have this line right at the beginning of your /etc/apt/sources.list and add the following entry to /etc/apt/preferences:

vim /etc/apt/preferences:

Package: *
Pin: origin repo.deskbit.io
Pin-Priority: 1001
-------------------------------------------------------------------------
16-Before we can use the repository, we must import its key:
wget -O - -q http://repo.deskbit.io/repo.deskbit.io.gpg.key | apt-key add -

apt-get update
-------------------------------------------------------------------------

+Packages to Install (Feb. 24, 2017, 10:15 a.m.)

pavucontrol proxychains android-tools-adb android-tools-fastboot gimp-plugin-registry gimp gir1.2-keybinder-3.0 quodlibet python3-dev python-dev libjpeg-dev libfreetype6 libfreetype6-dev zlib1g-dev zip python-setuptools vim postgresql-server-dev-all postgresql libpq-dev curl geany python-pip tmux git virtaal gdebi-core gdebi smplayer yakuake vlc gparted krita transmission-gtk htop graphicsmagick-imagemagick-compat network-manager-l2tp python3-pip kaffeine pptp-linux network-manager-pptp aria2 kazam

------------------------------------------------------------------

pip3 install pipenv

------------------------------------------------------------------

Xtreme Download Manager:

wget https://sourceforge.net/projects/xdman/files/latest/download -O xdman.deb

------------------------------------------------------------------

+Faster grep (Jan. 7, 2017, 4:59 p.m.)

1- Install `parallel`
sudo apt-get install parallel

2- Begin search:
find . -type f | parallel -k -j150% -n 1000 -m grep -H -n "keyring doesn't exist" {}

+Write ISO file to DVD in terminal (Sept. 3, 2016, 9:13 p.m.)

Using this command, check where the DVD Writer is mounted: (/dev/sr0)
inxi -d

And using this command, start writing on the DVD:
wodim -eject -tao speed=8 dev=/dev/sr0 -v -data Downloads/linuxmint-18-kde-64bit-beta.iso

+See Linux Version (Aug. 15, 2016, 3:26 p.m.)

cat /etc/os-release

cat /etc/*release

uname -a

lsb_release -a

+PyCharm / IntelliJ IDEA allows only two spaces (July 26, 2016, 12:37 p.m.)

In settings search for `EditorConfig` and disable the plugin.

+Enable/Disable Bluetooth (July 26, 2016, 10:42 a.m.)

sudo rfkill block bluetooth
sudo update-rc.d bluetooth disable
service bluetooth status

--------------------------------------------------------------------

sudo rfkill unblock bluetooth
sudo update-rc.d bluetooth enable
service bluetooth status

--------------------------------------------------------------------

+Error: Fixing recursive fault but reboot is needed! (July 17, 2016, 9:49 a.m.)

sudo nano /etc/default/grub

Change:
GRUB_CMDLINE_LINUX_DEFAULT
GRUB_CMDLINE_LINUX

To:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX="acpi=off"

sudo update-grub2

+No partitions found while installing Linux (July 15, 2016, 9:28 p.m.)

1- Boot up linux with Live CD (the installation disk)
2- sudo su
3- sudo apt-get install gdisk
4- sudo gdisk /dev/sda
5- Select (1) for MBR
6- Type x for expert stuff
7- Type z to zap the GPT data
8- Type y to proceed destroying GPT data
9- Type n in order to not lose MBR data

Now restart the installation procedure.

+Remove invalid characters from filenames (May 29, 2016, 8:18 a.m.)

find . -exec rename 's/[^\x00-\x7F]//g' "{}" \;

+PyCharm Regex (May 23, 2016, 2:07 a.m.)

https://www.jetbrains.com/help/pycharm/2016.1/regular-expression-syntax-reference.html

{8}"name_ru": ".+?",\n

-------------------------------------------------------------------

Search for any occurrences starting with a double quote:
.?"

-------------------------------------------------------------------

+SASL authentication for IRC network using freenode (April 14, 2016, 7:36 p.m.)

https://userbase.kde.org/Konversation/Configuring_SASL_authentication

chat.freenode.net
port: 6697
Make sure to use "Secure Connection (SSL)"

+Batch rename files (March 11, 2016, 10:53 a.m.)

for file in *.html
do
mv "$file" "${file%.html}.txt"
done

---------------------------------------------------------------

for file in *
do mv "$file" "$file.mp3"
done

---------------------------------------------------------------

Remove the word "crop_" in all files:

for file in *; do mv "$file" "${file/crop_/}"; done
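
This uses bash's pattern substitution, ${variable/pattern/replacement}. A sketch of the general form (the file names and patterns here are hypothetical):

# replace the first occurrence of "old" with "new" in every .txt name
for file in *.txt; do mv "$file" "${file/old/new}"; done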

---------------------------------------------------------------

+Genymotion (April 10, 2016, 7:22 p.m.)

1-apt-get install libdouble-conversion1

2-Download `Ubuntu 14.10 and older, Debian 8` genymotion version from the following link:
https://www.genymotion.com/download/
The downloaded file name should be `genymotion-2.8.0-linux_x64.bin`.

3-sudo bash ./genymotion-2.8.0-linux_x64.bin

4-For running it, use this command:
/opt/genymobile/genymotion/genymotion

5- You should already have the Genymotion VirtualBox (ova) files. If so, you need to change the path of VirtualBox virtual devices in settings to the location of your files.
Settings --> Virtualbox (tab) --> Browse

Hint:
After this step I still could not see the list of virtual devices in the Genymotion program. I imported the ova files in the VirtualBox program, and they got displayed in Genymotion too.

+ADB (Nov. 2, 2015, 5:04 p.m.)

sudo apt-get install android-tools-adb android-tools-fastboot

+Gimp Plugin (Nov. 2, 2015, 5:03 p.m.)

sudo apt-get install gimp-plugin-registry

+Diff over SSH (Oct. 12, 2015, 10:40 a.m.)

diff /home/mohsen/Projects/Shetab/nespresso/nespresso/urls.py <(ssh shetab@buynespresso.ir 'cat /home/shetab/websites/nespresso/nespresso/urls.py')

+Trim/Cut video files (Sept. 14, 2015, 2:03 p.m.)

ffmpeg -i video.mp4 -ss 10 -t 10 -c copy cut2.mp4

The first 10 is the start time in seconds:
10 ==> 10 seconds from start
1:10 ==> One minute and 10 seconds
1:10:10 ==> One hour, one minute and ten seconds


The second 10 is the duration.

+Retrieve Video File Information (Sept. 14, 2015, 12:02 p.m.)

mplayer -vo null -ao null -frames 0 -identify test.mp4

+Change Hostname (Aug. 6, 2015, 11:14 p.m.)

nano /etc/hostname
/etc/init.d/hostname.sh start

nano /etc/hosts
service hostname restart
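
On systemd-based systems, the same can be done in one step with hostnamectl:
hostnamectl set-hostname newname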

+Get public IP address and email it (July 25, 2015, 1:17 p.m.)

Getting public IP address in bash:

wget -qO- ifconfig.me/ip
OR
curl ifconfig.me/ip

------------------------------------------------------------

Getting it and emailing it (copy this script and paste it in a file with `.sh` extension):
#!/bin/bash
IPADDRESS=$(wget -qO- ifconfig.me/ip)
# IPADDRESS=$(curl ifconfig.me/ip)
if [[ "${IPADDRESS}" != $(cat ~/.current_ip) ]]
then
    echo "Your new IP address is ${IPADDRESS}" |
    mail -s "IP address change" mohsen@mohsenhassani.com
    echo ${IPADDRESS} >| ~/.current_ip
fi
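
To run the check periodically, a crontab entry like this works (the script path is an assumption):
*/10 * * * * /home/mohsen/check_ip.sh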

------------------------------------------------------------

+Libreoffice - Add/Remove RTL and LTR buttons to the LibreOffice formatting toolbar (July 8, 2015, 7:41 p.m.)

You have to enable Complex Text Layout (CTL) support:

1- Tools → Options → Language Settings → Languages

2- Enable `Complex Text Layout (CTL)`

3- Select Persian

4- Restart libreoffice.

+Installing Irancell 3G-4G Modem Driver (July 8, 2015, 10:53 a.m.)

1- sudo apt-get install g++-multilib libusb-dev libusb-0.1-4:i386

2- Connect the modem and copy the `linuxdrivers.tar.gz` file to your computer, extract it and cd to the directory.

3- cd to the `drivers` directory and install the driver using the `install_driver` file:
sudo ./install_driver

4- Create a shortcut to the file `lcdoshift.sh` to make the connection procedure easier:
ln -s /home/mohsen/Programs/linuxdrivers/drivers/lcdoshift.sh .

5- To establish a connection use the command:
sudo ~/lcdoshift.sh

--------------------------------------------------------------------------

And this is the output:

Looking for default devices ...
Found default devices (1)
Accessing device 007 on bus 003 ...

USB description data (for identification)

-------------------------

Manufacturer: Longcheer
Product: LH9207
Serial No.:

-------------------------

Looking for active driver ...
No driver found. Either detached before or never attached
Setting up communication with interface 0 ...
Trying to send the message to endpoint 0x01 ...
OK, message successfully sent
-> Run lsusb to note any changes. Bye.

sleep 3
ifconfig ecm0 up
dhclient ecm0
mohsen drivers #

+Quodlibet Multimedia Keys (June 3, 2015, 9:12 p.m.)

apt-get install gir1.2-keybinder-3.0

+Connecting to wifi network through command line (June 3, 2015, 6:13 p.m.)

1- sudo iwlist wlan0 scan

2- sudo iwconfig wlan0 essid "THE SSID"

3- iwconfig wlan0 key s:password

4- sudo dhclient wlan0

+Root Password Recovery (May 27, 2015, 1:24 p.m.)

rw init=/bin/bash

+Locale Settings (Feb. 5, 2016, 1:40 a.m.)

This first solution has worked. So before checking the other solutions, try this one first!

nano /etc/environment
LC_ALL=en_US.UTF-8
LANG=en_US.UTF-8

Restart server and it should be fixed now!

------------------------------------------------------------------------

locale-gen en_US.UTF-8

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
dpkg-reconfigure locales

------------------------------------------------------------------------

This is a common problem if you are connecting remotely, so the solution is to not forward your locale. Edit /etc/ssh/ssh_config and comment out the SendEnv LANG LC_* line.

------------------------------------------------------------------------

+TV Card Driver (April 17, 2015, 7:08 p.m.)

http://nucblog.net/2014/11/installing-media-build-drivers-for-additional-tv-tuner-support-in-linux/

1- sudo apt-get install libproc-processtable-perl git libc6-dev

2- git clone git://linuxtv.org/media_build.git

3- cd media_build

4- $ ./build

5- sudo make install

6- apt-get install me-tv kaffeine

7- Reboot to load the driver (I don't know the module name for modprobe yet).

--------------------------------------------------------------------------------------------------

Scan channels using Kaffeine:

1- Open Kaffeine

2- From `Television` menu, choose `Configure Television`.

3- From `Device 1` tab, from `Source` option, choose `Autoscan`

4- From `Television` menu choose `Channels`

5- Click on `Start Scan` and after the scan procedure is done, select all channels from the side panel and click on `Add Selected` to add them to your channels.

--------------------------------------------------------------------------------------------------

Scan channels using Me-TV

1- Open Me-TV

2- When the scan dialog opens, choose `Czech Republic` from `Auto scan`.

--------------------------------------------------------------------------------------------------

+Environment Variable (April 3, 2015, 8:46 p.m.)

www.cyberciti.biz/faq/set-environment-variable-linux/

---------------------------------------------------------------------------------------------

Commonly Used Shell Variables:

http://bash.cyberciti.biz/guide/Variables#Commonly_Used_Shell_Variables

---------------------------------------------------------------------------------------------

Use `set` command to display current environment

---------------------------------------------------------------------------------------------

The $PATH defines the search path for commands. It is a colon-separated list of directories in which the shell looks for commands.

---------------------------------------------------------------------------------------------

You can display the value of a variable using printf or echo command:
$ echo "$HOME"

---------------------------------------------------------------------------------------------

You can modify each environmental or system variable using the export command. Set the PATH environment variable to include the directory where you installed the bin directory with perl and shell scripts:

export PATH=${PATH}:/home/vivek/bin

OR

export PATH=${PATH}:${HOME}/bin

--------------------------------------------------------------------------------------------

You can set multiple paths as follows:

export ANT_HOME=/path/to/ant/dir
export PATH=${PATH}:${ANT_HOME}/bin:${JAVA_HOME}/bin

---------------------------------------------------------------------------------------------

How Do I Make All Settings permanent?

The ~/.bash_profile ($HOME/.bash_profile) or ~/.profile file is executed when you log in via the console or remotely using ssh. Type the following command to edit the ~/.bash_profile file:

$ vi ~/.bash_profile

Append the $PATH settings, enter:
export PATH=${PATH}:${HOME}/bin
Save and close the file.

---------------------------------------------------------------------------------------------------------

+Ubuntu Sources List Generator (March 18, 2015, 3:52 p.m.)

http://repogen.simplylinux.ch/

http://www.ubuntuupdates.org/ppa/mint_main

+Delete special files recursively (March 7, 2015, 2:36 p.m.)

find . -name "*.bak" -type f -delete

find . -name "*.bak" -type f

+How to stop services / programs from starting automatically (March 3, 2015, 11:27 a.m.)

update-rc.d -f apache2 remove

+Truetype Fonts (Arial Font) (Feb. 22, 2015, 1:10 p.m.)

http://www.cyberciti.biz/faq/howto-debian-install-use-ms-windows-truetype-fonts-under-xorg/
---------------------------------------------------------------------------------------------
apt-get install ttf-liberation

+Add Resolutions (Feb. 15, 2015, 11:19 a.m.)

1. Install arandr
apt install arandr


2. Run "arandr" from the applications menu.


3. Create a resolution by doing the following:
In this example, the resolution I want is 1920x1080
cvt 1920 1080

This will create a modeline like this:
Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync

Create the new mode:
xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync


4. Add the mode (resolution) to the desired monitor: (Get the list of active outputs from the "output" menu in Arandr application)
xrandr --addmode VGA-1 "1920x1080_60.00"


5- For switching to the newly created resolution:
xrandr -s 1920x1080

OR

xrandr --output VGA-1 --mode "1920x1080"

OR

5. Run arandr and position your monitors correctly

6. Choose 'layout' then 'save as' to save the script

7. I found the best place to load the script (under Xubuntu) is the settings manager:
xfce4-settings-manager

OR

Menu -> Settings -> Settings Manager -> Session and Startup -> Application Autostart

+Dump traffic on a network (Feb. 7, 2015, 11:33 a.m.)

tcpdump -nti any port 4301

To connect to it:
telnet 5.32.34.54 4301
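
To keep the capture for later inspection (e.g. in Wireshark), add -w to write the raw packets to a file:
tcpdump -ni any -w capture.pcap port 4301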

+Show open ports and listening services (Feb. 7, 2015, 10:33 a.m.)

netstat -an | egrep 'Proto|LISTEN'

netstat -lnptu

--------------------------------------------------------------

netstat -tulpn | grep 389
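
On newer systems where netstat is deprecated, ss provides the same view:
ss -tulpn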

-------------------------------------------------------------------

+Make Bootable USB stick (Jan. 8, 2015, 7:50 p.m.)

sudo dd if=~/Desktop/linuxmint.iso of=/dev/sdx oflag=direct bs=4M status=progress

An alternative output flag: oflag=sync

----------------------------------------------------------------

This method works better for making Windows images:

Download "WoeUSB" from the following link and use the GUI application to create the USB disk.

http://ppa.launchpad.net/nilarimogard/webupd8/ubuntu/pool/main/w/woeusb/

----------------------------------------------------------------

+Change locale/timezone and set the clock (Sept. 20, 2015, 1:57 p.m.)

1- ln -sf /usr/share/zoneinfo/Asia/Tehran /etc/localtime
2- apt install ntp
3- ntpd
4- hwclock -w

-------------------------------------------------------------------

Linux Set Date Command Example
# date -s "2 OCT 2006 18:00:00"

OR

# date --set="2 OCT 2006 18:00:00"

OR

# date +%Y%m%d -s "20081128"

OR

# date +%T -s "10:13:13"
Where,

10: Hour (hh)
13: Minute (mm)
13: Second (ss)

Use %p, the locale's equivalent of either AM or PM:
# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"

-------------------------------------------------------------------

yum install ntp
ln -sf /usr/share/zoneinfo/Asia/Tehran /etc/localtime
/etc/init.d/ntpd stop
ntpdate 0.pool.ntp.org

-------------------------------------------------------------------

+Split and Join/Merging Files (Nov. 28, 2014, 11:58 a.m.)

split --bytes=1M NimkatOnline-1.0.0.apk NimkatOnline
-l ==> split by lines
-b ==> split by bytes
M ==> Megabytes (size suffix, e.g. --bytes=1M)
G ==> Gigabytes


split --bytes=1M images/myimage.jpg new

split -b 22 newfile.txt new
Split the file newfile.txt into separate files called newaa, newab, newac, ..., each containing 22 bytes of data.

split -l 300 file.txt new
Split the file file.txt into files beginning with the name new, each containing 300 lines of text.

-------------------------------------------------------------

For merging or joining files:

cat new* > newimage.jpg

-------------------------------------------------------------

+Locate (Nov. 13, 2014, 10:03 p.m.)

Match the exact filename:

locate -b '\filename'

-------------------------------------------------------------

Don’t output all the results, but only the number of matching entries.

locate -c test
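
locate reads a prebuilt index, so freshly created files may not show up until the index is refreshed:
sudo updatedb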

-------------------------------------------------------------

+SSH login without password (Nov. 13, 2014, 7:29 p.m.)

1- ssh-keygen -t rsa (No need to set a password)

2- ssh-copy-id mohsen@mohsenhassani.com

Now you can log in without a password
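
If ssh-copy-id is not available, the public key can be appended manually (a sketch, assuming the default key path):
cat ~/.ssh/id_rsa.pub | ssh mohsen@mohsenhassani.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'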

+APT - The location where apt-get caches/stores .deb files (Oct. 18, 2014, 6:16 a.m.)

/var/cache/apt/archives/

+Recover Files (Sept. 14, 2014, 7:24 p.m.)

Using this program you can undelete/recover deleted files:
testdisk

After selecting the desired hard disk, press the capital `P` key to show all the deleted files.

+Setting Proxy Variable (Aug. 22, 2014, 12:44 p.m.)

export http_proxy="localhost:9000"
export https_proxy="localhost:9000"
export ftp_proxy="localhost:9000"

And for removing environment variables:
unset http_proxy
unset https_proxy
unset ftp_proxy

+Getting folder size (Aug. 22, 2014, 12:38 p.m.)

du -sh /path/to/directory
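
To list each subdirectory's size, sorted from smallest to largest (both are standard GNU options):
du -sh /path/to/directory/* | sort -h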

+Join *.001, *.002, .... files (Aug. 22, 2014, 12:33 p.m.)

cat filename.avi.* > filename.avi

+ISO files (Aug. 22, 2014, 12:33 p.m.)

Convert .DAA Files To .ISO

Download and install PowerISO using the following link:
http://www.poweriso.com/download.php
Scroll to the bottom of the page, to the `Other downloads` section, to get the Linux version.

1- wget http://www.poweriso.com/poweriso-1.3.tar.gz

2- tar -zxvf poweriso-1.3.tar.gz

3- You can copy the extracted file “poweriso” to /usr/bin so that all users of the computer can use it.
Now, to convert for example a .daa file to .iso, use this command:
poweriso convert /path/to/source.daa -o /path/to/target.iso -ot iso

There are more useful poweriso commands:
Task: list all files and directories in the root directory of /media/file.iso

poweriso list /media/file.iso /
poweriso list /media/file.iso / -r
*******
For more commands, type:
poweriso -?

--------------------------------------------------------------------------------------

Convert DMG to ISO

1- Install the tool
sudo apt-get install dmg2img

2- The following command will convert the .dmg to .img file in ISO format:
dmg2img <file_name>.dmg

3- And finally, rename the extension:
mv <file_name>.img <file_name>.iso

--------------------------------------------------------------------------------------

Create ISO file from a directory:
mkisofs -allow-limited-size -o abcd.iso abcd

--------------------------------------------------------------------------------------

+Nautilus Bookmarks (Aug. 22, 2014, 12:26 p.m.)

Nautilus bookmarks configuration file location:
~/.config/gtk-3.0/bookmarks

For seeing which version of nautilus you have:
nautilus --version

+Convert mp3 to ogg (Aug. 22, 2014, 12:32 p.m.)

1- apt-get install mpg321 vorbis-tools

2- mpg321 input.mp3 -w raw && oggenc raw -o output.ogg

+Convert rpm to deb (Aug. 22, 2014, 12:26 p.m.)

1- apt-get install alien

2- alien -d package-name.rpm

+Tmux (Aug. 22, 2014, 12:31 p.m.)

Prompt not following normal bash colors:

For fixing the problem, create a file `~/.tmux.conf` if it does not exist, and add the following to it:
set -g default-terminal "screen-256color"

set -g history-limit 100000

--------------------------------------------------------------------------

Tmux Plugin Manager:

git clone https://github.com/tmux-plugins/tpm ~/.tmux/plugins/tpm

Put this at the bottom of ~/.tmux.conf:

# List of plugins
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'

# Initialize TMUX plugin manager (keep this line at the very bottom of tmux.conf)
run '~/.tmux/plugins/tpm/tpm'

--------------------------------------------------------------------------

Installing plugins:

1-Add new plugin to ~/.tmux.conf with set -g @plugin '...'
2-Press prefix + I (capital I, as in Install) to fetch the plugin.

--------------------------------------------------------------------------

Uninstalling plugins:

1-Remove (or comment out) plugin from the list.
2-Press prefix + alt + u (lowercase u as in uninstall) to remove the plugin.

--------------------------------------------------------------------------

Tmux-continuum plugin:

set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'

Automatic restore:
Last saved environment is automatically restored when tmux is started.
Put this in tmux.conf to enable:
set -g @continuum-restore 'on'
set -g @resurrect-capture-pane-contents 'on'

--------------------------------------------------------------------------

CPU/RAM/battery stats chart bar:

install the plugin using CPAN:
sudo cpan -i App::rainbarf

If it's the first time you're using CPAN, you might be asked to let some plugins get installed automatically.
Choose (yes) and then (sudo) to let the plugin be installed.

After installation, create a config file ~/.rainbarf.conf with this content:

width=20 # widget width
bolt # fancy charging character
remaining # display remaining battery
rgb # 256-colored palette

--------------------------------------------------------------------------

Whole config file:

set -g default-terminal "screen-256color"
set-option -g status-utf8 on

set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'

set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @plugin 'tmux-plugins/tmux-logging'
set -g @continuum-restore 'on'
set -g @resurrect-capture-pane-contents 'on'

set -g history-limit 500000

set -g status-right '#(rainbarf)'
set -g default-command bash

run '~/.tmux/plugins/tpm/tpm'

--------------------------------------------------------------------------

PRESS CTRL+B and CTRL+I to install plugins after editing the .tmux.conf file.

--------------------------------------------------------------------------

CTRL + B and SHIFT + P to start (and end) logging in current pane.
CTRL + B and ALT + P to start (and end) to capture screen.

Save complete history:
CTRL + B and ALT + SHIFT + P

Clear pane history:
CTRL + B and ALT + C

--------------------------------------------------------------------------

Swap Window:
swap-window -s 3 -t 1

--------------------------------------------------------------------------

Copy paste in Tmux:

1- Enter copy mode using Control+b [
2- Navigate to beginning of text, you want to select and hit Control+Space.
3- Move around using arrow keys to select region.
4- When you reach end of region simply hit Alt+w to copy the region.
5- Now Control+b ] will paste the selection.

--------------------------------------------------------------------------

+Undeleting (Aug. 22, 2014, 12:30 p.m.)

1- Install extundelete: apt-get install extundelete

2- Either "unmount" or "remount" the partition as read-only:
sudo mount -t vfat -o remount,ro /dev/sdb /mnt

To remount it back to read-write: (This task is not part of this tutorial. It's just for keeping a note.)
sudo mount -t vfat -o remount,rw /dev/sdb /mnt

3- For restoring the files from the whole partition:
extundelete /dev/sdb1 --restore-all
And for restoring important files quickly, you may use the --restore-file, --restore-files, or --restore-directory options.

+Error - ia32-libs : Depends: ia32-libs-i386 but it is not installable (Aug. 22, 2014, 12:29 p.m.)

The ia32-libs-i386 package is only installable from the i386 repository, which becomes available with the following commands:

dpkg --add-architecture i386
apt-get update

+Driver - Samsung Printer (July 20, 2015, 11:23 p.m.)

https://doc.ubuntu-fr.org/tutoriel/installer_imprimante_samsung

Installing My Samsung Printer Driver (SCX-4521F):

1-Add the following repository to /etc/apt/sources.list:
deb http://www.bchemnet.com/suldr/ debian extra

2-Install the GPG key:
sudo apt-get install suldr-keyring
apt-get update

3-Install these packages:
apt-get install samsungmfp-driver-4.00.39 suld-configurator-2-qt4

+Grub rescue (Aug. 22, 2014, 12:02 p.m.)

I haven't tried it yet, so keep in mind to correct the problems:
mount /dev/sdaX /mnt
grub-install --root-directory=/mnt/ /dev/sda

OR

Another day I just used these commands; some would give me errors, but some would work... and to my surprise it worked:
set prefix=(hd0,1)/boot/grub
insmod (hd0,1)/boot/grub/linux.mod
insmod part_msdos
insmod ext2
set root=(hd0,1)
reboot using CTRL+ALT+DELETE

+Commands - iftop (Aug. 22, 2014, 12:23 p.m.)

iftop: InterFace Table of Processes

Install iftop for viewing what applications are using/eating up Internet.

iftop -i eth1

# The logs from xchat help:
in iftop hit `p` to toggle port display
now you know which port on your machine is connecting out to that domain
now use netstat -nlp to list all pids on which ports are connecting out
you should now know which pid is hitting that domain... provided all traffic originates on your local box
also consider using lsof for this sort of mining

+unrar (Aug. 22, 2014, 12:03 p.m.)

Recurse subdirectories:

unrar x -r <parent directory>

---------------------------------------------------------------------

unrar e file.rar

unrar l file.rar

---------------------------------------------------------------------

Unrar all files:

for file in *.part01.rar; do unrar x ${file}; done;

---------------------------------------------------------------------

+Swap file (Aug. 22, 2014, 12:02 p.m.)

Create a swap file:

1- dd if=/dev/zero of=/swapfile1 bs=1024 count=524288

Where,
if=/dev/zero : Read from /dev/zero. /dev/zero is a special file that provides as many null characters as needed to build the storage file /swapfile1.
of=/swapfile1 : Write the storage file to /swapfile1.
bs=1024 : Read and write 1024 bytes at a time.
count=524288 : Copy only 524288 input blocks.


2- mkswap /swapfile1


3- chown root:root /swapfile1
chmod 0600 /swapfile1


4- swapon /swapfile1


5- nano /etc/fstab
Append the following line:
/swapfile1 swap swap defaults 0 0


6- To test/see the free space:
free -m
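
You can also list the active swap areas:
swapon -s
OR (on newer util-linux):
swapon --show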

+Aliases (Aug. 22, 2014, noon)

Defining alias:

1- Open the file ~/.bashrc and write an alias like this:
alias myvps='ssh -p 54321 mohsen@mohsenhassani.com'


2- Enter this command to make the changes take effect:
source .bashrc


Keep in mind that every time a change is made to the .bashrc file, you have to reload it with:
source .bashrc

+Backlight (Screen Brightness) (Aug. 22, 2014, 11:32 a.m.)

To solve the backlight brightness problem, go to /etc/default/grub and edit the GRUB_CMDLINE_LINUX_DEFAULT line to:

GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_osi=Linux acpi_backlight=vendor splash"

And then:
update-grub2

----------------------------------------------------------------------------

Check if graphics card is intel:
ls /sys/class/backlight

You should see something like:
ideapad intel_backlight

----------------------------------------------------------------------------

Fix backlight:
Create this file: /usr/share/X11/xorg.conf.d/20-intel.conf

Section "Device"
Driver "intel"
Option "Backlight" "intel_backlight"
Identifier "card0"
EndSection

Logout and Login. Done.

----------------------------------------------------------------------------

+IRC (Aug. 22, 2014, 11:28 a.m.)

1- Join the Freenode network. Open your favorite IRC client and type:
/server irc.freenode.net


2- Choose a user name or nick. This user name should consist only of the letters from A-Z, the numbers from 0-9, and certain symbols such as "_" and "-". It may have a maximum of 16 characters.


3- Change your user name to the user name you have chosen. Suppose you chose the nickname "awesomenickname". Type the following in the window titled Freenode:
/nick awesomenickname


4- Register your nick or user name. Type the following command and replace "your_password" with a password that will be easy to remember, and replace "your_email_address" with your email address.
/msg nickserv register your_password your_email_address


5- Verify your registration. After you register, you will not be able to identify to NickServ until you have verified your registration. To do this, check your email for an account verification code.


6- Group an alternate nickname with your main one. If you would like to register an alternate nickname, first switch to the alternate nickname that you want while you are identified as the main one, then group your nicks together with this command:
/msg nickserv group


7- Identify with Nickserv. Each time you connect, you should sign in, or "identify" yourself, using the following command:
/msg nickserv identify your_password


You can send private messages anytime after step 4. The advantage of the other steps is to make your registration much more secure. To send a private message, you simply do the following, replacing Nick with the nick or user name of the person you wish to contact privately and message with the message you want to start with:
/msg Nick message

Take care to follow this process in the Freenode window, not directly in a channel. If you type all the commands correctly, nothing should be visible to others, but it's very easy to type something else by mistake, and in so doing, you could expose your password.

Choose a nick between 5 and 8 characters long. This will make it easier to identify and avoid confusion. Choose your nick wisely. Remember that users will identify this name with your person.

User names will automatically expire after 60 days of disuse. This is counted from the last time it was identified with NickServ. If the nickname you want is not in use and you want it, you can contact somebody with Freenode staff to unassign it for you. If you will not be able to use IRC for 60 days you can extend the time using the vacation command (/msg nickserv vacation). Vacation will be disabled automatically next time you identify to NickServ.

To check when a nick was last identified with NickServ, use /msg NickServ info Nick

The Freenode staff have an option enabled to receive private messages from unregistered users so if you wish to request that a nick be freed, you do not have to register another.
To contact a member of the staff, use the command /stats p or /quote stats p if the first doesn't work. Send them a private message using /query nick.
In case there is no available staff member in /stats p, use /who freenode/staff/* or join the channel #freenode using /join #freenode.

Avoid using user names that are brand names or famous people, to avoid conflicts.

If you don't want your IP to be seen to the public, contact FreeNode staff and they can give you a generic "unaffiliated" user cloak, if you are not a member of a project.

If you want to hide your email address, use /msg nickserv set hidemail on.

If you need to change your password, type /ns set password new_password. You will need to be logged in.



# select nick name
/nick yournickname

# better don't show your email address:
/ns set hide email on

# register (only one time needed) - PW is in clear text!!
/msg NickServ register [password] [email]

# identify yourself to the IRC server (always needed) (xxxx == pw)
/msg NickServ IDENTIFY xxxx

# Join a channel
/join #grass

----------------------------------------------------------------

Registering a channel:

1- To check whether a channel has already been registered, use the command:
/msg ChanServ info #Mohsen or ##Mohsen

2- /join #Mohsen

3- /msg ChanServ register #Mohsen

----------------------------------------------------------------

For gaining OP:

/MSG chanserv op #my_channel Mohsen_Hassani

----------------------------------------------------------------

+zip (Aug. 22, 2014, 11:25 a.m.)

To zip just one file (file.txt) to a zipfile (zipfile.zip), type the following:
zip zipfile.zip file.txt

To zip an entire directory:
zip -r zipfile.zip directory

zip -r -e saverestorepassword saverestore
The -e flag will prompt you to specify a password and then verify it. You will see nothing happening in the Terminal as you type the password. This will create a password-protected zip file named saverestorepassword.zip containing your saverestore directory.
In the above examples, the name of the zip file can be whatever name you choose.

unzip test.zip

unzip test.zip -d music
This will extract the contents of test.zip to the music folder. Caveat, the directory must already exist.

Now let's extract the saverestorebackup.zip file. In this example I'll extract it to my music folder so I don't overwrite my current data in the saverestore folder. Again, this assumes you've just launched Terminal:
cd /media/internal
unzip saverestorebackup.zip -d music

In the above two examples, the -d flag indicates to extract the zip file to the directory specified, music in this case.
--------
For excluding a directory in zip:
zip -r test.zip test -x "path/to/exclusion/directory/*"
1-Take note that the exclusion path should be in quotes, with a star at the end.
2-The * (star) at the end of the command excludes ALL the sub-files and sub-directories, so don't forget to use it!
3-The path should not start from '/home/mohsen/...'; it should be relative to the directory where you run the command.
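
For example, to exclude a node_modules directory (a hypothetical layout):
zip -r project.zip project -x "project/node_modules/*"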

+Commands - ssh (Aug. 22, 2014, 11:22 a.m.)

SSH is an abbreviation of Secure SHell. It is a protocol that allows secure connections between computers.
To connect to an ssh service running on a non-default port:
ssh -p yourport yourusername@yourserver

Running a command on the remote server:
Sometimes, especially in scripts, you'll want to connect to the remote server, run a single command and then exit again. The ssh command has a nice feature for this. You can just specify the command after the options, username and hostname. Have a look at this:
ssh yourusername@yourserver updatedb
This will make the server update its searching database. Of course, this is a very simple command without arguments. What if you'd want to tell someone about the latest news you read on the web? You might think that the following will give him/her that message:
ssh yourusername@yourserver wall "Hey, I just found out something great! Have a look at www.examplenewslink.com!"
However, bash will give an error if you run this command:
bash: !": event not found
What happened? Bash (the program behind your shell) tried to interpret the command you wanted to give ssh. This fails because there are exclamation marks in the command, which bash will interpret as special characters that should initiate a bash function. But we don't want this, we just want bash to give the command to ssh! Well, there's a very simple way to tell bash not to worry about the contents of the command but just pass it on to ssh already: wrapping it in single quotes. Have a look at this:
ssh yourusername@yourserver 'wall "Hey, I just found out something great! Have a look at www.examplenewslink.com!"'
The single quotes prevent bash from trying to interpret the command, so ssh receives it unmodified and can send it to the server as it should. Don't forget that the single quotes should be around the whole command, not anywhere else.
------------------
sudo ssh-keygen -R hostname
------------------
Creating ssh key:
ssh-keygen -t rsa
------------------
When the server is just installed, the first access is possible via:
ssh-keygen -R <ip of server>
------------------
SSH Tunnel:
1-Create a user on the server:
adduser <username>

2-Copy the user's ssh_key from his computer to the server:
ssh-copy-id -i ~/.ssh/id_rsa.pub <username>@<server_ip>

3-Run this command on user's computer:
ssh -D <an optional port, like 9000> -fN <username>@<server_ip>

4-Change the Connection Settings of Mozilla, SOCKS Host:
localhost 9000
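
To verify the tunnel works (the port matches the -D value above):
curl --socks5-hostname localhost:9000 ifconfig.me/ip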

+GPG (Aug. 22, 2014, 11:21 a.m.)

1- apt install dirmngr

2- apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DB141E2302FDF932

+wget (Aug. 22, 2014, 11:18 a.m.)

ERROR: The certificate of `www.dropbox.com' is not trusted.
ERROR: The certificate of `www.dropbox.com' hasn't got a known issuer.

wget --no-check-certificate <url_link>

-------------------------------------------------------------

Mirror an entire website
wget -m http://google.com

-------------------------------------------------------------

Mirror entire website:

wget --mirror --random-wait --convert-links --adjust-extension --page-requisites --no-host-directories -erobots=off --no-cache http://domain.com/

-------------------------------------------------------------

Print file to stdout like curl does:

wget -O - http://example.com/text.txt

-------------------------------------------------------------

Recursively download only files with the pdf extension upto two levels away:

wget -r -l 2 -A "*.pdf" http://papers.xtremepapers.com/CIE/Cambridge%20Checkpoint/

-------------------------------------------------------------

Get your external ip address from icanhazip.com and echo to STDOUT:

wget -O - http://icanhazip.com/ | tail

-------------------------------------------------------------

Open tarball without downloading:

wget -qO - "http://www.tarball.com/tarball.gz" | tar zxvf -

-------------------------------------------------------------

The option -c or --continue will resume an interrupted download:

wget -c https://scans.io/data/umich/https/certificates/raw_certificates.csv.gz

-------------------------------------------------------------

Download a list of urls from a file:

wget -i urls.txt

-------------------------------------------------------------

Save file into directory:

wget -P path/to/directory http://bropages.org/bro.html

-------------------------------------------------------------

Saves the HTML of a webpage to a particular file:

wget -O bro.html http://bropages.org/

-------------------------------------------------------------

Download entire website:

Short Version:
wget --user-agent="Mozilla" -mkEpnp http://example.org


Explanation:

wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.org

Explanation of the various flags:

--mirror – Makes (among other things) the download recursive.
--convert-links – convert all the links (also to stuff like CSS stylesheets) to relative, so it will be suitable for offline viewing.
--adjust-extension – Adds suitable extensions to filenames (html or css) depending on their content-type.
--page-requisites – Download things like CSS style-sheets and images required to properly display the page offline.
--no-parent – When recursing, do not ascend to the parent directory. It's useful for restricting the download to only a portion of the site.

-------------------------------------------------------------

+Commands - lsof (Aug. 22, 2014, 11:12 a.m.)

lsof -i:<port>

Example: lsof -i:80

Displays the process which uses the port 80.


----------------------------------------------------------

+Commands - ps (Aug. 22, 2014, 11:06 a.m.)

ps

Lists the processes of the current shell session

----------------------------------------------------------

ps -A

Displays all processes

----------------------------------------------------------

kill + PID of process

Terminates a process

----------------------------------------------------------

+Changing the attributes of a file/directory (Aug. 22, 2014, 11:05 a.m.)

The attributes are read/write/execute for owner/group/others, with the value weights being
4-2-1 in each position.

--------------------------------------------------------------

To give everyone execute-only permission to a file:

chmod 111

--------------------------------------------------------------

For all permissions, it'd be

chmod 777

--------------------------------------------------------------

Owner-only r/w/x would be

chmod 700

--------------------------------------------------------------

4 = read
2 = write
1 = execute
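
A worked example combining the digits:
chmod 754 file    # owner: 4+2+1=7 (rwx), group: 4+1=5 (r-x), others: 4 (r--)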

--------------------------------------------------------------

+Commands - ls (Aug. 22, 2014, 11:04 a.m.)

ls -r
Reverse order while sorting

--------------------------------------------------------------

ls -F
Shows executable files with '*' sign and link files with '@'

--------------------------------------------------------------

ls -t
Sort by time

--------------------------------------------------------------

+Shutting down (Aug. 22, 2014, 10:42 a.m.)

shutdown -r now
shutdown -r 7:00

+Directories (Aug. 22, 2014, 10:28 a.m.)

/bin - Essential user commands

The /bin directory contains essential commands that every user will need. This includes your login shell and basic utilities like ls. The contents of this directory are usually fixed at the time you install Linux. Programs you install later will usually go elsewhere.

---------------------------------------------------------------------------------

/usr/bin - Most user commands

The /usr hierarchy contains the programs and related files meant for users. (The original Unix makers had a thing for abbreviation.) The /usr/bin directory contains the program binaries. If you just installed a software package and don't know where the binary went, this is the first place to look. A typical desktop system will have many programs here.

---------------------------------------------------------------------------------

/usr/local/bin - "Local" commands

When you compile software from source code, those install files are usually kept separate from those provided as part of your Linux distribution. That is what the /usr/local/ hierarchy is for.

---------------------------------------------------------------------------------

/sbin - Essential System Admin Commands

The /sbin directory contains programs needed by the system administrator, like fsck, which is used to check file systems for errors. Like /bin, /sbin is populated when you install your Linux system, and rarely changes.

---------------------------------------------------------------------------------

/usr/sbin - Non-essential System Administration Programs (binaries)

This is where you will find commands for optional system services and network servers. Desktop tools will not show up here, but if you just installed a new mail server, this is where to look for the binaries.

---------------------------------------------------------------------------------

/usr/local/sbin - "Local" System Administration Commands

When you compile servers or administration utilities from source code, this is where the binaries normally will go.

---------------------------------------------------------------------------------

Libraries:

Libraries are shared bits of code. On Windows these are called DLL files (Dynamic Loading Libraries). On Linux systems they are usually called SO (Shared Object) files. As to location, are you detecting a pattern yet? There are three directories where library files are placed: /lib, /usr/lib, and /usr/local/lib.

---------------------------------------------------------------------------------

Documentation:

Documentation is a minor exception to the pattern of file placement. Pages of the system manual (man pages) follow the same pattern as the programs they document: /man, /usr/man, and /usr/local/man. You should not access these files directly, however, but by using the man command.
Many programs install additional documentation in the form of text files, HTML, or other formats that are not man pages. This extra documentation is stored in directories under /usr/share/doc or /usr/local/share/doc. (On older systems you may find it under /usr/doc instead.)

---------------------------------------------------------------------------------

+Tarballs (Tar Archive) (Aug. 22, 2014, 10:21 a.m.)

tar -xzvf filename.tar.gz

x : eXtract
z : filter the archive through gzip
v : verbose output
f : read from a file (rather than a tape device)

(Use j instead of z for bzip2-compressed files.)

-------------------------------------------------------------

Creating a tar File:
tar -cvf output.tar /dirname

tar -cvf Projects.tar Projects --exclude=Projects/virtualenvs --exclude=".buildozer" --exclude=".git"

tar -cvf output.tar /dirname1 /dirname2 filename1 filename2

tar -cvf output.tar /home/vivek/data /home/vivek/pictures /home/vivek/file.txt

tar -cvf /tmp/output.tar /home/vivek/data /home/vivek/pictures /home/vivek/file.txt

Where,

-c : Create a tar ball.
-v : Verbose output (show progress).
-f : Output tar ball archive file name.
-x : Extract all files from archive.tar.
-t : Display the contents (file list) of an archive.

-------------------------------------------------------------

Create a tar Archive File:
tar -cf abcd.tar /home/mohsen/abcd


Untar Single file from tar File:
tar -xf abcd.tar x.png
OR
tar --extract --file=abcd.tar x.png


Untar Multiple files:
tar -xf abcd.tar "x.png" "y.png" "z.png"

-------------------------------------------------------------

Create tar.gz Archive File (compressed gzip archive):
tar -czf abcd.tar.gz /home/mohsen/abcd


Uncompress tar.gz Archive File:
tar -xf abcd.tar.gz
tar -xf abcd.tar.gz -C /home/mohsen/Temp/


List Content tar.gz Archive File:
tar -tvf abcd.tar.gz


Untar Single file from tar.gz File:
tar -zxf abcd.tar.gz x.png
tar --extract --file=abcd.tar.gz x.png


Untar Multiple files:
tar -zxf abcd.tar.gz "x.png" "y.png" "z.png"

-------------------------------------------------------------

Create tar.bz2 Archive File:

bz2 compression produces archives smaller than gzip does, but it takes more time to compress and decompress.

tar -cjf abcd.tar.bz2 /home/mohsen/abcd


Uncompress tar.bz2 Archive File:
tar -xf abcd.tar.bz2


List content tar.bz2 archive file:
tar -tvf abcd.tar.bz2


Untar single file from tar.bz2 File:
tar -jxf abcd.tar.bz2 home/mohsen/x.png
tar --extract --file=abcd.tar.bz2 home/mohsen/x.png


Untar multiple files:
tar -jxf abcd.tar.bz2 "x.png" "y.png" "z.png"

-------------------------------------------------------------

Extract group of files using wildcard:
tar -xf abcd.tar --wildcards '*.png'
tar -zxf abcd.tar.gz --wildcards '*.png'
tar -jxf abcd.tar.bz2 --wildcards '*.png'

-------------------------------------------------------------

Add files or directories to tar archive file:
Use the option r (append)

tar -rf abcd.tar m.png
tar -rf abcd.tar images


The tar command doesn't have an option to add files or directories to an existing compressed tar.gz or tar.bz2 archive file. If we try, we will get the following error:
tar: This does not look like a tar archive
tar: Skipping to next header

-------------------------------------------------------------

Create a tar archive using xz compression:
tar -cJf abcd.tar.xz /path/to/archive/

Decompression:
tar xf abcd.tar.xz

-------------------------------------------------------------

Compress supporting source and destination directory:
tar -cf /home/mohsen/Temp/abcd.tar -P /home/mohsen/Temp/abcd
tar -cPf /home/mohsen/Temp/abcd.tar /home/mohsen/Temp/abcd

-------------------------------------------------------------

Tar Usage and Options:

c – create an archive file.
x – extract an archive file.
v – show the progress of the archive file.
f – filename of the archive file.
t – view the contents (file list) of an archive file.
j – filter the archive through bzip2.
z – filter the archive through gzip.
r – append or update files or directories in an existing archive file.
W – verify an archive file.
wildcards – specify patterns in the tar command.

-P (--absolute-names) – don't strip leading '/'s from file names

-------------------------------------------------------------

xz:
tar -cJf my_folder.tar.xz my_folder

-------------------------------------------------------------

tar zc --exclude node_modules -f tiptong.tar.gz tiptong

-------------------------------------------------------------

Extract to a different directory:

tar -xf file.name.tar -C /path/to/directory

tar xf file.tar --directory /path/to/directory

-------------------------------------------------------------

+apt-get (Aug. 22, 2014, 10:21 a.m.)

apt-get upgrade
Updating the software

apt-get -s upgrade
To simulate an update installation, i.e. to see which software will be updated.

+Search for text in files (Aug. 9, 2015, 9:45 p.m.)

find . -name "*.txt" | xargs grep -i "text_pattern"

------------------------------------------------------------------------

find / -type f -exec grep -l "text-to-find-here" {} \;

------------------------------------------------------------------------

grep word_to_find file_name -n --color

The --color flag highlights the matched words

------------------------------------------------------------------------

grep "<the word or text to be searched>" / -Rn --color -T


Description:

/: The location to be searched

R: Search in recursive mode

n: Display the number of the line in which the occurrence word or text is located

color: Display the search result colored

T: Separate the search result with a tab

l: stands for "show the file name, not the result itself"

------------------------------------------------------------------------

grep -Rin "text-to-find-here" /

OR

grep --color -Rin "text-to-find-here" / (to make it colorful)

OR

egrep -w -R 'word1|word2' ~/projects/ (for two words)


i stands for upper/lower case
w stands for the whole word

----------------------------------------------------------------

Find specific files and search for specific words:

find . -name '*.py' -exec grep -Rin 'resize' {} +

Finds the word `resize` in python files.

OR

find -iname "*.py" | xargs grep -i django

----------------------------------------------------------------

Command "grep", only in certain file extensions:

grep -Rnw 'YEAR' --include \*.py

----------------------------------------------------------------

+dpkg (Aug. 22, 2014, 10:19 a.m.)

dpkg --get-selections
To get list of all installed software

dpkg-query -W
To get list of installed software packages

dpkg -l
Description of installed software packages

+sources.list (Aug. 22, 2014, 9:58 a.m.)

deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main

deb http://ftp.debian.org/debian/ jessie-updates main
deb-src http://ftp.debian.org/debian/ jessie-updates main


deb http://ftp.debian.org/debian/ jessie main
deb-src http://ftp.debian.org/debian/ jessie main

----------------------------------------------------------------------------------------

deb http://deb.debian.org/debian stretch main
deb-src http://deb.debian.org/debian stretch main

deb http://deb.debian.org/debian stretch-updates main
deb-src http://deb.debian.org/debian stretch-updates main

deb http://security.debian.org/debian-security/ stretch/updates main
deb-src http://security.debian.org/debian-security/ stretch/updates main

----------------------------------------------------------------------------------------

+PIP (Aug. 22, 2014, 9:14 a.m.)

pip install SomePackage # latest version

pip install SomePackage==1.0.4 # specific version

pip install 'SomePackage>=1.0.4' # minimum version

pip install -r requirements.txt

pip install --upgrade SomePackage

------------------------------------------------------------------------

Install a package with setuptools extras.

pip install SomePackage[PDF]

pip install SomePackage[PDF]==3.0

pip install -e .[PDF]==3.0 # editable project in current directory

------------------------------------------------------------------------

Install a particular source archive file.

pip install ./downloads/SomePackage-1.0.4.tar.gz
pip install http://my.package.repo/SomePackage-1.0.4.zip

------------------------------------------------------------------------

Install from alternative package repositories. (Install from a different index, and not PyPI):
pip install --index-url http://my.package.repo/simple/ SomePackage

Search an additional index during install, in addition to PyPI:
pip install --extra-index-url http://my.package.repo/simple SomePackage

Install from a local flat directory containing archives (and don’t scan indexes):
pip install --no-index --find-links=file:///local/dir/ SomePackage
pip install --no-index --find-links=/local/dir/ SomePackage
pip install --no-index --find-links=relative/dir/ SomePackage

------------------------------------------------------------------------

Find pre-release and development versions, in addition to stable versions.
By default, pip only finds stable versions.

pip install --pre SomePackage

--------------------------------------------------------------------------

pip uninstall [options] <package> ...
pip uninstall [options] -r <requirements file> ...

Options:

-r, --requirement <file>
Uninstall all the packages listed in the given requirements file. This option can be used multiple times.

-y, --yes
Don't ask for confirmation of uninstall deletions.

--------------------------------------------------------------------------

pip freeze [options]

Description:
Output installed packages in requirements format.

Options:
-r, --requirement <file>
Use the order in the given requirements file and its comments when generating output.

-f, --find-links <url>
URL for finding packages, which will be added to the output.

-l, --local
If in a virtualenv that has global access, do not output globally-installed packages.

Examples:
Generate output suitable for a requirements file.
$ pip freeze
Jinja2==2.6
Pygments==1.5
Sphinx==1.1.3
docutils==0.9.1

Generate a requirements file and then install from it in another environment.
$ env1/bin/pip freeze > requirements.txt
$ env2/bin/pip install -r requirements.txt

--------------------------------------------------------------------------

pip list [options]

Description:
List installed packages, including editable ones.

Options:
-o, --outdated
List outdated packages (excluding editables)

-u, --uptodate
List up-to-date packages (excluding editables)

-e, --editable
List editable projects.

-l, --local
If in a virtualenv that has global access, do not list globally-installed packages.

--pre
Include pre-release and development versions. By default, pip only finds stable versions.

Examples:
List installed packages.
$ pip list
Pygments (1.5)
docutils (0.9.1)
Sphinx (1.1.2)
Jinja2 (2.6)

List outdated packages (excluding editables), and the latest version available
$ pip list --outdated
docutils (Current: 0.9.1 Latest: 0.10)
Sphinx (Current: 1.1.2 Latest: 1.1.3)

--------------------------------------------------------------------------

pip show [options] <package> ...

Description:
Show information about one or more installed packages.

Options:
-f, --files
Show the full list of installed files for each package.

Examples:
Show information about a package:
$ pip show sphinx
The output will be:
Name: Sphinx
Version: 1.1.3
Location: /my/env/lib/pythonx.x/site-packages
Requires: Pygments, Jinja2, docutils

--------------------------------------------------------------------------

pip search [options] <query>

Description:
Search for PyPI packages whose name or summary contains <query>.

Options:
--index <url>
Base URL of Python Package Index (default https://pypi.python.org/pypi)

Examples:
Search for “peppercorn”
pip search peppercorn
pepperedform - Helpers for using peppercorn with formprocess.
peppercorn - A library for converting a token stream into [...]

--------------------------------------------------------------------------

pip zip [options] <package> ...

Description:
Zip individual packages.

Options:
--unzip
Unzip (rather than zip) a package.

--no-pyc
Do not include .pyc files in zip files (useful on Google App Engine).

-l, --list
List the packages available, and their zip status.

--sort-files
With --list, sort packages according to how many files they contain.

--path <paths>
Restrict operations to the given paths (may include wildcards).

-n, --simulate
Do not actually perform the zip/unzip operation.

--------------------------------------------------------------------------

This command will download the zipped/tar file into the current directory (or into a directory specified with -d):
pip download `package_name`


pip download \
--only-binary=:all: \
--platform linux_x86_64 \
--python-version 33 \
--implementation cp \
--abi cp34m \
pip>=8


pip download \
--only-binary=:all: \
--platform macosx-10_10_x86_64 \
--python-version 27 \
--implementation cp \
SomePackage

--------------------------------------------------------------------------

pip install --allow-all-external pil --allow-unverified pil

--------------------------------------------------------------------------

ReadTimeoutError: HTTPSConnectionPool(host='pypi.python.org', port=443)

pip install --default-timeout=200 <package_name>

--------------------------------------------------------------------------

pip install pip-review

pip-review --local --interactive

--------------------------------------------------------------------------

mkdir pip_files && cd pip_files

pip download -r requirements.txt

--------------------------------------------------------------------------

+Irancell WiMAX modem (Aug. 17, 2015, 9:55 a.m.)

For installing the driver, install these packages first:
apt-get install linux-headers-`uname -r` libssl-dev usb-modeswitch zip

---------------------------------------------------------------------------------------------

The wimaxd binary would not get recognized by the terminal, so I copied it into the /bin directory.
There was an error "error while loading shared libraries: libeap_supplicant.so: cannot open shared object file". To fix the problem, I added the "libeap_supplicant.so" path to /etc/ld.so.conf and re-ran ldconfig.

Another incident, which is not related to WiMAX: one day when I was installing and running Apache, there was an error similar to this WiMAX error: "error while loading shared libraries: libexpat.so.0: cannot open shared object file". I searched for the file using the "locate" command, copied it into "/usr/lib", and ran Apache again; the problem was solved!

---------------------------------------------------------------------------------------------

WiMAX linux-headers error:

make: *** /lib/modules/3.13.0-37-generic/source: No such file or directory. Stop.

1-rm /lib/modules/3.13.0-37-generic/source
2-ln -s /usr/src/linux-headers-3.13.0-37 /lib/modules/3.13.0-37-generic/source

--------------------------------------------------------------------------------------------

Usage:

1-su
2-wimaxd -D -c wimaxd.conf
3- (in another console) wimaxc -i
3.1-search
3.2-connect
4-(in another console) su
4.1-dhclient eth1

--------------------------------------------------------------------------------------------

+Version, Distro, Release (Aug. 4, 2014, 4:38 a.m.)

uname -r

----------------------------------------------------------------------------

Find or identify which version of Debian Linux you are running:

cat /etc/debian_version

----------------------------------------------------------------------------

What is my current Linux distribution?

cat /etc/issue

----------------------------------------------------------------------------

How Do I Find Out My Kernel Version?

uname -mrs

----------------------------------------------------------------------------

lsb_release Command:

The lsb_release command displays certain LSB (Linux Standard Base) and distribution-specific information.
lsb_release -a

----------------------------------------------------------------------------

+Hardware & Driver information (Aug. 4, 2014, 4:37 a.m.)

lshw

----------------------------------------------------------------------------

See PCI devices along with their kernel modules (device drivers):

lspci -k

It first shows you all the PCI devices attached to your system and then tells you what kernel modules (device drivers), are being used by them.

----------------------------------------------------------------------------

Identify Computer Model:

sudo grep "" /sys/class/dmi/id/[bpc]*

----------------------------------------------------------------------------

+Sudoer (Aug. 4, 2014, 4:36 a.m.)

visudo
Scroll to the bottom of the file and enter:
mohsen ALL=(ALL) ALL

Mac OS
+VMware Tools (Jan. 23, 2017, 1:16 p.m.)

Darwin Image for VMware Tools for Mac OS X:
http://www.insanelymac.com/forum/files/file/31-vmware-tools-for-os-x-darwiniso/

+Password Reset (Sept. 12, 2016, 12:39 a.m.)

1-Turn off your Mac (choose Apple > Shut Down).
2-Press the power button while holding down Command-R. The Mac will boot into Recovery mode. ...
3-Select Disk Utility and press Continue.
4-Choose Utilities > Terminal.
5-Enter resetpassword (all one word, lowercase letters) and press Return.
6-Select the volume containing the account (normally this will be your Main hard drive).
7-Choose the account to change with Select the User Account.
8-Enter a new password and re-enter it into the password fields.
9-Enter a new password hint related to the password.
10-Click Save.
11-A warning will appear that the password has changed, but not the Keychain Password. Click OK.
12-Click Apple > Shut Down.

Now start up the Mac. You can login using the new password.

+Install Ionic (June 21, 2016, 11:08 p.m.)

brew install npm

sudo npm install -g cordova ionic

npm install -g ios-sim

npm install -g ios-deploy
-----------------------------
ionic platform add ios
ionic resources
-----------------------------
ionic build ios

+Speed Up Mac by Disabling Features (June 21, 2016, 11:13 p.m.)

Disable Open/Close Window Animations
defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false
-------------------------------------
Disable Quick Look Animations
defaults write -g QLPanelAnimationDuration -float 0
-------------------------------------
Disable Window Size Adjustment Animations
defaults write NSGlobalDomain NSWindowResizeTime -float 0.001
-------------------------------------
Disable Dock Animations

defaults write com.apple.dock launchanim -bool false
-------------------------------------
Disable the “Get Info” Animation
defaults write com.apple.finder DisableAllAnimations -bool true
-------------------------------------
Get rid of Dashboard
defaults write com.apple.dashboard mcx-disabled -boolean YES
killall Dock
-------------------------------------
Speed Up Window Resizing Animation Speed
defaults write -g NSWindowResizeTime -float 0.003
-------------------------------------
Disable The Eye Candy Transparent Windows & Effects
System Preferences -> Accessibility -> Display
Check the box for “Reduce Transparency”
-------------------------------------
Disable Unnecessary Widgets & Extensions in Notifications Center
System Preferences -> Extensions -> Today
Uncheck all options you don’t need or care about
-------------------------------------

+Disable SIP (June 20, 2016, 12:37 a.m.)

csrutil status
csrutil disable
reboot

+Recovery HD partition with El Capitan bootable via Clover (June 19, 2016, 7:46 p.m.)

1- diskutil list
You will get the partition list, note that the Recovery Partition is obviously named "Recovery HD"

2- Create a folder in Volumes folder for Recovery HD and mount it there:
sudo mkdir /Volumes/Recovery\ HD
sudo mount -t hfs /dev/disk0s3 /Volumes/Recovery\ HD

3- Remove the file `prelinkedkernel` from the directory `com.apple.recovery.boot`:
sudo rm -rf /Volumes/Recovery\ HD/com.apple.recovery.boot/prelinkedkernel

4- Copy your working `prelinkedkernel` there:
sudo cp /System/Library/PrelinkedKernels/prelinkedkernel /Volumes/Recovery\ HD/com.apple.recovery.boot/

5- Reboot

+Mac OS X on Virtualbox (June 12, 2016, 3:29 p.m.)

vboxmanage modifyvm "Mac OS X 10.11" --cpuidset 00000001 000106e5 00100800 0098e3fd bfebfbff

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiSystemProduct" "iMac11,3"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiSystemVersion" "1.0"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiBoardProduct" "Iloveapple"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/smc/0/Config/DeviceKey" "ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/smc/0/Config/GetKeyFromRealSMC" 1

VBoxManage setextradata "Mac OS X 10.11" "VBoxInternal2/EfiBootArgs" " "

+Convert Installation DMG to ISO - Create a Bootable ISO (June 11, 2016, 10:04 p.m.)

You need to run these commands on a Mac OS X:

# Mount the installer image
hdiutil attach /Applications/Install\ OS\ X\ El\ Capitan.app/Contents/SharedSupport/InstallESD.dmg -noverify -nobrowse -mountpoint /Volumes/install_app

# Create the ElCapitan Blank ISO Image of 7316mb with a Single Partition - Apple Partition Map
hdiutil create -o /tmp/ElCapitan.cdr -size 7316m -layout SPUD -fs HFS+J

# Mount the ElCapitan Blank ISO Image
hdiutil attach /tmp/ElCapitan.cdr.dmg -noverify -nobrowse -mountpoint /Volumes/install_build

# Restore the Base System into the ElCapitan Blank ISO Image
asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase

# Remove Package link and replace with actual files
rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages
cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/

# Copy El Capitan installer dependencies
cp -rp /Volumes/install_app/BaseSystem.chunklist /Volumes/OS\ X\ Base\ System/BaseSystem.chunklist
cp -rp /Volumes/install_app/BaseSystem.dmg /Volumes/OS\ X\ Base\ System/BaseSystem.dmg

# Unmount the installer image
hdiutil detach /Volumes/install_app

# Unmount the ElCapitan ISO Image
hdiutil detach /Volumes/OS\ X\ Base\ System/

# Convert the ElCapitan ISO Image to ISO/CD master (Optional)
hdiutil convert /tmp/ElCapitan.cdr.dmg -format UDTO -o /tmp/ElCapitan.iso

# Rename the ElCapitan ISO Image and move it to the desktop
mv /tmp/ElCapitan.iso.cdr ~/Desktop/ElCapitan.iso

+Commands (June 9, 2016, 1:45 p.m.)

Locate command:
To create the database for using `locate` command, run the following command:
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist

updatedb ==> sudo /usr/libexec/locate.updatedb
----------------------------------------------------------------------

+Installing Xcode (June 6, 2016, 3:31 p.m.)

For downloading Xcode or other development tools, you need to log into apple.com using your Apple ID account and then open the following link:
https://developer.apple.com/downloads/

Download Xcode and Command Line Tools!

+Applications (June 5, 2016, 2:04 p.m.)

brew install proxychains-ng

sudo nano /usr/local/Cellar/proxychains-ng/4.11/etc/proxychains.conf
----------------------------------------------------------------
brew install npm
----------------------------------------------------------------
brew install ssh-copy-id
----------------------------------------------------------------
brew install tmux
----------------------------------------------------------------

+Installing Homebrew (June 5, 2016, 1:47 p.m.)

Reference Site:
http://brew.sh/
------------------------------------------------
1- You need to install the Developer Tools first. Check whether you already have them using the `gcc --version` command. If the tools are not installed, a dialog will open asking if you want to install them. Choose Install.

2- The website says you only need to use the following command to install brew (but it might be blocked for us in Iran, at the time of writing this tutorial):
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

If it was still blocked, for installing it you need to open the following URL in a proxy activated browser, and save the script in your Mac OS:
https://raw.githubusercontent.com/Homebrew/install/master/install

Install it using this command:
ruby brew.sh

Mail Server
+What is reverse DNS (rDNS)? (Feb. 12, 2020, 3:07 p.m.)

Reverse DNS, or rDNS, does the opposite of the traditional DNS. That is, instead of resolving a domain name to an IP, it resolves an IP to a hostname.


The rDNS resolution is a completely separate mechanism from the regular DNS resolution. For example, if the domain “yourcompany.com” points to IP 1.2.3.4 (a dummy IP address), it doesn’t necessarily mean that the reverse resolution for IP 1.2.3.4 is “yourcompany.com”.

To store rDNS records, there’s a specific type of DNS record called the PTR record, also known as the “pointer record”. It specifies the hostname for an IP address, with the IP written in an inverted notation.

This rDNS configuration allows you to search for an IP in the DNS, since the in-addr.arpa domain is appended to the inverted IP notation, turning the IP into a domain name.

For example: in order to convert the IP address 1.2.3.4 into a PTR record, we need to invert the IP and add the domain inaddr.arpa which results in the following record: 4.3.2.1.in-addr.arpa.

--------------------------------------------------------------------------------------------

When is rDNS useful?

If you want to prevent email issues. If you’re hosting your own email server, rDNS becomes pretty useful for your outgoing emails. An rDNS record allows tracing the origin of the email, increasing the credibility of the email server, and becoming a trusted source for many popular email providers such as Gmail, Yahoo, Hotmail, and others. Some incoming email servers won’t even let your email arrive at their email boxes if you don’t have an rDNS record setup. So if you’re using your own mail server, you’ll want to keep it in mind.

When you’re performing a cybercrime investigation. Another popular use of reverse DNS records is to identify potential threats and mass scanners throughout the Internet. By using both security API endpoints, or web-based products like SurfaceBrowser, you or your team can easily identify authors and networks behind mass scanning, malware spreading or other types of malicious activities.

--------------------------------------------------------------------------------------------

How can I perform a reverse DNS lookup?

There are many methods and rDNS lookup tools in use for doing the opposite of a normal DNS check: resolving a given IP to a hostname.

Some of these web-based utilities are known as reverse DNS tools, and they all do the same thing: query a given IP to resolve its hostname. Let’s look at some terminal-based examples first:

dig -x 1.1.1.1

host 1.1.1.1
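
For example, querying Cloudflare's resolver IP should print something like:
host 1.1.1.1
1.1.1.1.in-addr.arpa domain name pointer one.one.one.one.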

--------------------------------------------------------------------------------------------

+Difference Between Maildir and Mbox Directory Structure (Feb. 12, 2020, 1:03 p.m.)

Maildir and Mbox are email formats that act as a directory for storing messages in email applications. Mbox was the original mail storage system on a cPanel server, but now Maildir is the default option. Mbox places all messages in the same file on the server, whereas, Maildir stores messages in individual files with unique names.



Maildir

Directories in the Maildir format have three subdirectories. They are:

1) new: Each file in the new subdirectory is an incoming email message that was received recently. It is used for notifying the user of a new message. The modification time of a file in the new directory is the delivery date of the message. The message is normally in RFC 822 format, in which it starts with a “Return-path” line and a “Delivered-to” line.

2) cur: The files in the cur directory are the same as in the new directory, but the files in cur are no longer new mail. They have been seen by the user’s mail reading program. That is, it holds only those messages which have been read by the user.

3) tmp: The tmp directory contains temporary data files associated with the Maildir directory. It is used for ensuring reliable delivery of messages.
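
So a single virtual mailbox on disk typically looks like the following (the domain/user path here is an assumed layout):
/var/mail/vmail/example.com/user/
    cur/
    new/
    tmp/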



Benefits of Maildir

1) Maildir is more current.

2) Faster and more stable than mbox.

3) The main advantage of this format is that messages can easily be classified into subdirectories. When a new message arrives, it is filtered accordingly and moved into the respective subdirectory.

4) The files can be distributed over the network without any compatibility issues.

5) Compatible with both Courier and Dovecot mail servers.

6) The most secure format, with minimal chance of data corruption.

7) A Maildir directory creates one single file for every incoming mail message.



Mailbox

Mailbox file format is also known as Mbox. Mbox is an email file type which stores messages in plain text format. The email contents are stored as 7-bit ASCII text, and the rest of the email components (attachments, metadata, etc.) are stored in encoded form. Mailbox works in a single-file format in which all email messages are stored in a single file on the account, usually the inbox.



Benefits of Mbox

1) The file format is universally supported.

2) Appending new mail in the mailbox is faster.

3) Searching text inside the mailbox is faster.

It has some file-locking problems, as well as problems when used with network file systems.

+PostfixAdmin (Feb. 10, 2020, 10:07 a.m.)

1- Download the latest version of PostfixAdmin:
cd /srv/
wget -O postfixadmin.tgz https://github.com/postfixadmin/postfixadmin/archive/postfixadmin-3.2.tar.gz
tar -zxf postfixadmin.tgz
mv postfixadmin-postfixadmin-3.2 postfixadmin



2- Copy the "PHP Configuration" from my notes in "Nginx" category to nginx sites-enabled.
server_name postfix.mohsenhassani.com;
root /srv/postfixadmin/public;



3- Create a PostgreSQL user "postfix" and a database named "postfix"
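
A minimal sketch of this step using the standard PostgreSQL client tools (createuser -P prompts for the new user's password):
sudo -u postgres createuser -P postfix
sudo -u postgres createdb -O postfix postfix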



4- Create /srv/postfixadmin/config.local.php file for your local configuration.
vim /srv/postfixadmin/config.local.php

Configure PostfixAdmin so it can find the database. Add the following lines to config.local.php:
<?php
$CONF['database_type'] = 'pgsql';
$CONF['database_user'] = 'postfix';
$CONF['database_password'] = 'your_password';
$CONF['database_name'] = 'postfix';

$CONF['configured'] = true;
?>

You can see config.inc.php for all available config options and their default value.
You can also edit config.inc.php instead of creating a config.local.php, but this will make updates harder and is therefore not recommended.



5- Create a template directory for smarty cache:
mkdir -p /srv/postfixadmin/templates_c
chown -R www-data /srv/postfixadmin/templates_c



6- Install the following packages:
apt install php7.3-imap dovecot-pgsql postfix-pgsql dovecot-pop3d dovecot-imapd dovecot-lmtpd



7- Check settings, and create Admin user.
Restart nginx and open the following link in your computer browser:
http://postfix.mohsenhassani.com/setup.php

You will be asked to set a setup password. After setting it, you will be given a password hash. Put it in the config file you created in the earlier steps:
$CONF['setup_password'] = '<the generated hash>';

Then you will be asked to create a superadmin account.



8- Since we are configuring a mail server with virtual users we need one system user which will be the owner of all mailboxes and will be used by the virtual users to access their email messages on the server.
groupadd -g 5000 vmail
useradd -u 5000 -g vmail -s /usr/sbin/nologin -d /var/mail/vmail -m vmail



9- Dovecot setup
vim /etc/dovecot/conf.d/10-mail.conf
mail_location = maildir:/var/mail/vmail/%d/%n/

If you don't have ssl:
vim /etc/dovecot/conf.d/10-ssl.conf
ssl = no

Login for outlook express and mobile applications:
vim /etc/dovecot/conf.d/10-auth.conf
disable_plaintext_auth = yes
auth_mechanisms = plain login
Comment this line so that you don't get errors like "pam_authenticate() failed: Authentication failure". We are using virtual users (from the database), so there is no need for PAM, which is for operating-system user authentication.
#!include auth-system.conf.ext
Uncomment this line:
!include auth-sql.conf.ext

vim /etc/dovecot/dovecot-sql.conf.ext
driver = pgsql
password_query = SELECT username AS user,password FROM mailbox WHERE username = '%u' AND active='1'
user_query = SELECT '/var/mail/vmail/' || maildir AS home, 5000 AS uid, 5000 AS gid, '*:bytes=' || quota AS quota_rule FROM mailbox WHERE username = '%u' AND active = true
connect = host=localhost dbname=postfix user=postfix password=my_password
default_pass_scheme = MD5 # depends on your $CONF['encrypt'] Postfixadmin settings



10- Add the following lines to Postfix configurations file:
vim /etc/postfix/main.cf

relay_domains = $mydestination, proxy:pgsql:/etc/postfix/pgsql/relay_domains.cf
virtual_alias_maps = proxy:pgsql:/etc/postfix/pgsql/virtual_alias_maps.cf
virtual_mailbox_domains = proxy:pgsql:/etc/postfix/pgsql/virtual_domains_maps.cf
virtual_mailbox_maps = proxy:pgsql:/etc/postfix/pgsql/virtual_mailbox_maps.cf
virtual_mailbox_base = /var/mail/vmail
virtual_mailbox_limit = 512000000
virtual_minimum_uid = 8
virtual_transport = virtual
virtual_uid_maps = static:8
virtual_gid_maps = static:8
local_transport = virtual
local_recipient_maps = $virtual_mailbox_maps

# SASL Auth for SMTP relaying
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_authenticated_header = yes
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
broken_sasl_auth_clients = yes


11- Create a folder and some config files, then add the following lines in each file:
mkdir /etc/postfix/pgsql

vim /etc/postfix/pgsql/relay_domains.cf
user = postfix
password = whatever
hosts = localhost
dbname = postfix
query = SELECT domain FROM domain WHERE domain='%s'

vim /etc/postfix/pgsql/virtual_alias_maps.cf
user = postfix
password = whatever
hosts = localhost
dbname = postfix
query = SELECT goto FROM alias WHERE address='%s' AND active = true

vim /etc/postfix/pgsql/virtual_domains_maps.cf
user = postfix
password = whatever
hosts = localhost
dbname = postfix
#query = SELECT domain FROM domain WHERE domain='%s'
#optional query to use when relaying for backup MX
query = SELECT domain FROM domain WHERE domain='%s' and backupmx = false and active = true

vim /etc/postfix/pgsql/virtual_mailbox_maps.cf
user = postfix
password = whatever
hosts = localhost
dbname = postfix
query = SELECT maildir FROM mailbox WHERE username='%s' AND active = true

chmod 777 /etc/postfix/pgsql -R
chown root:postfix /etc/postfix/pgsql -R
postfix set-permissions



12- Enable Roundcube password plugin to enable database-based authentication:
vim /srv/roundcubemail/config/config.inc.php
// Enable plugins
$config['plugins'] = array('managesieve','password');
// Configure managesieve plugin
$rcmail_config['managesieve_port'] = 4190;
// Configure password plugin
$config['password_driver'] = 'sql';
$config['password_db_dsn'] = 'pgsql://postfix:my_password@localhost/postfix';
$config['password_query'] = 'UPDATE mailbox SET password=%c WHERE username=%u';

---------------------------------------------------------------------------------

Debug:


These postmap queries should return the found string:

Note that we are NOT authenticating against the credentials set for each email account, we are only testing the ability of Postfix to detect those records in the database.

postmap -q nozhanrayan.com pgsql:/etc/postfix/pgsql/virtual_domains_maps.cf
postmap -q ceo@nozhanrayan.com pgsql:/etc/postfix/pgsql/virtual_alias_maps.cf


doveadm auth test -x service=imap -x rip=127.0.0.1 mohsen@mohsenhassani.com


tail -f /var/log/mail*.log


If you're having trouble, try uncommenting the following lines in the file:
vim /etc/dovecot/conf.d/10-logging.conf
auth_debug = yes
auth_debug_passwords = yes
auth_verbose = yes

---------------------------------------------------------------------------------

+Roundcube - Enable emoticons plugin (Dec. 25, 2019, 8:57 p.m.)

1- Edit the file config.inc.php
/srv/roundcube/config/config.inc.php


2- Add 'emoticons' to line 49:
$config['plugins'] = array('emoticons');

+Virtual domains (Aug. 22, 2014, 9:54 a.m.)

1- Add these lines to /etc/postfix/main.cf
virtual_alias_domains = mohsenhassani.com nozhanrayan.com
virtual_alias_maps = hash:/etc/postfix/virtual


2- Create a file "/etc/postfix/virtual" and specify the domains and users to accept mail for.
info@mohsenhassani.com mohsen
accounting@mohsenhassani.com mohsen

info@nozhanrayan.com nozhanrayan
accounting@nozhanrayan.com nozhanrayan


3- postmap /etc/postfix/virtual


4- /etc/init.d/postfix restart

+Find Postfix mail server version (Dec. 15, 2018, 2:54 a.m.)

postconf -d mail_version

+Roundcube (Dec. 15, 2019, 2:52 a.m.)

1- You will need these packages for Roundcube installer:
apt install php-mbstring php-gd php-imagick php-pgsql php-intl php-pear php-zip php-common php-cli php-fpm



2- Download and extract the latest "complete" Roundcube version from:
https://roundcube.net/download/
Extract it and give it write/read permission:
chmod 777 roundcubemail -R
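
A sketch of this step, assuming version 1.4.2 (adjust to whatever the latest release on the download page is):
cd /srv
wget https://github.com/roundcube/roundcubemail/releases/download/1.4.2/roundcubemail-1.4.2-complete.tar.gz
tar -xzf roundcubemail-1.4.2-complete.tar.gz
mv roundcubemail-1.4.2 roundcubemail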



3- Copy the "PHP Configuration" from my notes in "Nginx" category to nginx sites-enabled.



4- Create a PostgreSQL user "roundcube", with a password, and a database named "roundcubemail".
You need the initial SQL database structure for PostgreSQL database. This file exists in the root folder of the roundcube you just downloaded, "roundcubemail/SQL/postgres.initial.sql". Use the following command to load the structure into the database:
psql -U roundcube -f /srv/roundcubemail/SQL/postgres.initial.sql roundcubemail


When setting configurations in step 6, if you get the error "DB Schema: NOT OK (Database schema differs)", you might need another version of the above structure file. You can download it from the following link:
https://github.com/roundcube/roundcubemail/tree/master/SQL/postgres
You need to download the file as RAW, not the rendered HTML page. Click on the link, then click "Raw", copy the link from the browser URL bar, and download the raw file using wget, something like the following link:
https://raw.githubusercontent.com/roundcube/roundcubemail/master/SQL/postgres.initial.sql
psql -U roundcube -f postgres.initial.sql roundcubemail



5- Edit the file "/etc/php/7.3/fpm/php.ini" and set:
date.timezone = 'Asia/Tehran'
upload_max_filesize = 300M
post_max_size = 300M



6- After restarting the required services, such as Nginx and probably php7.0-fpm, browse the address:
http://mail.mohsenhassani.com/installer/



7- Add the following line to the file /srv/roundcube/config/config.inc.php:
$config['mail_domain'] = 'mail.mohsenhassani.com';
$config['smtp_port'] = 25;



8- Enable creation of primary folders upon user login:
vim /srv/roundcubemail/config/defaults.inc.php
$config['create_default_folders'] = true;

-----------------------------------------------------------------------------


You can edit the settings and configurations you selected or filled in on the installer web page using this file:
roundcube/config/config.inc.php

-----------------------------------------------------------------------------

For debug purpose:
tail -f /srv/roundcube/logs/errors
tail -f /var/log/mail*.log

+Web Mail Installation (Dec. 15, 2019, 2:52 a.m.)

apt install postfix dovecot-core dovecot-imapd

----------------------------------------------------

For connecting your cellphone to the webmail:

Add these lines to /etc/postfix/main.cf
mydestination = mohsenhassani.com (Do not put mail.mohsenhassani.com. Only the main domain name!)
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
message_size_limit = 102400000

Edit these lines from /etc/dovecot/conf.d/10-auth.conf:
disable_plaintext_auth = no
auth_mechanisms = plain login


If there was any problem when connecting cellphone to your webmail, check the logs for solving the problems:
tail -f /var/log/mail*.log

----------------------------------------------------

Edit the file /etc/dovecot/conf.d/10-master.conf:

# Postfix smtp-auth
unix_listener /var/spool/postfix/private/auth {
mode = 0666
user = postfix
group = postfix
}

----------------------------------------------------

For having "Maildir", edit the file /etc/dovecot/conf.d/10-mail.conf:
mail_location = maildir:~/Maildir

And the file /etc/postfix/main.cf:
home_mailbox = Maildir/

mkdir ~/Maildir
chmod 700 ~/Maildir
chown mohsen:mohsen ~/Maildir

----------------------------------------------------

After making the above changes, restart the services:
dovecot
postfix

----------------------------------------------------

Debug IMAP:

telnet mail.mohsenhassani.com 143

Now type each line as a command:
a login USERNAME PASSWORD
a examine inbox
a logout

----------------------------------------------------

When receiving mails, I noticed a "delivered to command: procmail -a" message in the logs, and mails would not appear in the inbox. To solve the problem I had to use the following commands:

postconf -e 'home_mailbox = Maildir/'
postconf -e 'mailbox_command ='
/etc/init.d/postfix restart

----------------------------------------------------

+TXT Records (Dec. 15, 2019, 2:51 a.m.)

Create an account in https://www.agari.com, and using the instructions create DMARC DNS records.

You need to create a TXT record like this:
Host Name: _dmarc.mohsenhassani.com
Destination: <The values the agari.com site gives you> (without the double quotations)


Description:

DMARC stands for “Domain-based Message Authentication, Reporting & Conformance”, is an email authentication, policy, and reporting protocol. It builds on the widely deployed SPF and DKIM protocols, adding linkage to the author (“From:”) domain name, published policies for recipient handling of authentication failures, and reporting from receivers to senders, to improve and monitor protection of the domain from fraudulent email.
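
For example, a typical starter DMARC policy is published as a TXT record value like the following (p=none only monitors; the report address is a placeholder):
v=DMARC1; p=none; rua=mailto:postmaster@mohsenhassani.com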

-----------------------------------------------------------

Creating an SPF or Caller ID record:

Create a TXT record:
Host Name: mail.mohsenhassani.com
Destination: v=spf1 mx ip4:185.94.96.67 -all

-----------------------------------------------------------

+Test your Reverse PTR record (April 8, 2019, 2:51 a.m.)

http://mxtoolbox.com/ReverseLookup.aspx

+Is your domain's SPF record correct? (Dec. 15, 2018, 2:50 a.m.)

https://www.kitterman.com/spf/validate.html

+Is your domain's DKIM record correct? (Dec. 15, 2018, 2:50 a.m.)

http://www.dkim.org/

+Check your server IP is not on any email blacklists (Dec. 15, 2018, 2:48 a.m.)

whatismyipaddress.com/blacklist-check

+Description (Aug. 22, 2014, 9:49 a.m.)

Debian Mail Server Setup with Postfix + Dovecot + SASL

Postfix is an attempt to provide an alternative to the widely-used Sendmail program. Postfix attempts to be fast, easy to administer, and hopefully secure, while at the same time being sendmail compatible enough to not upset your users.

Dovecot is an open source IMAP and POP3 server for Linux/UNIX-like systems, written with security primarily in mind. Dovecot is an excellent choice for both small and large installations. It’s fast, simple to set up, requires no special administration and it uses very little memory.

When sending mail, the Postfix SMTP client can look up the remote SMTP server hostname or destination domain (the address right-hand part) in a SASL password table, and if a username/password is found, it will use that username and password to authenticate to the remote SMTP server. And as of version 2.3, Postfix can be configured to search its SASL password table by the sender email address.

Note: If you install a Postfix/Dovecot mail server, you will ONLY be able to send mail within your network. You can only send mail externally if you set up SASL authentication with TLS; otherwise you get a “Relay Access Denied” error.

SASL Configuration + TLS (Simple Authentication and Security Layer with Transport Layer Security) is used mainly to authenticate users before sending email to an external server, thus restricting relay access. If your relay server is kept open, spammers could use your mail server to send spam. It is essential to protect your mail server from misuse.

Misc
+Western Digital HDD Colors (June 29, 2020, 10:14 a.m.)

WD Blue

Suitable for home computers and everyday tasks.

The WD HDD Blue is very reliable and is available as an internal HDD or as an SSD. Blue is made for general use; for people that use their computers every day for activities like work, internet browsing, and casual gaming.

The revolution per minute varies from one model to another; there are models that have 5400 rpm and others that have 7200 rpm. WD Blue is suitable for storing anything that one needs including music, games, and pictures.

--------------------------------------------------------------------------

WD Green

Not recommended for computational work because of its low speed.
This drive is suitable for people who archive all kinds of documents, and is used for storing data in high volumes.

WD Green performed a similar role as WD Blue – so much so that they started to merge the two lines a few years ago – and people tended to prefer the Blue model.

Western Digital stopped manufacturing the original WD Green a while back and replaced it with a SATA SSD. You can no longer get Green as an HDD.

--------------------------------------------------------------------------

WD Black

Suitable for computational and speed-intensive work such as gaming.
Black drives use two processors.

WD Black offers maximum performance because of the capacity of its drive that ranges from 1TB to 6TB.

WD Black is suitable for PCs that are used in a workstation or for gamers because their performance and usability are very high. The cache of WD Black is 128MB compared to WD Blue that has 64MB cache.

All models of WD Black versions have a five-year warranty and 7200rpm. This is a high-performance HDD and, whilst it is available as an SSD, the majority of people buy Western Digital Black for an internal hard drive, or as an external HDD.

--------------------------------------------------------------------------

WD Red

Designed for network use, thanks to its ability to work around the clock without faults.

WD Red and WD Red Pro are designed to be slotted inside Network Attached Storage (NAS). WD Red is only compatible with NAS systems.

The capacities of the WD Red range from 1TB to a massive 14TB. They also get support from the RAID configuration.

The WD Reds excel more in read performance than in write performance. Red can be picked up as an SSD, but people mainly opt for the HDD versions, as it’s designed primarily to be a high-performing NAS storage solution. This helps to increase efficiency and productivity in business computer systems.

--------------------------------------------------------------------------

WD Purple

Used in DVR and NVR devices (security systems such as CCTV cameras).

WD Purple is the best surveillance drive for CCTV systems at home or in business workplaces. The Purple HDD color from WD can sustain its performance 24/7 and is capable of supporting up to 64 HD cameras at the same time.

It is mainly used by security systems to record and store videos. It is the exact opposite of WD Red because it excels in write performance rather than read performance. WD Purple utilizes a technology called ALL FRAME that helps minimize errors while recording and saving videos. The storage capacities range from 1TB to 14TB.

Purple is also available as SD & microSD cards for on-the-go surveillance like dash cams, body cams, drones, and more.

--------------------------------------------------------------------------

WD Gold

Suitable for data centers.
These drives also use two processors.

WD Gold is mainly used in enterprise-class storage systems and data centers. It often works well when used in servers, and can handle a lot of sophisticated systems simultaneously. This is the most expensive color HDD from WD (per terabyte) and is available from 1TB to 14TB.

This HDD color is one of the most reliable HDDs in the world; it can handle a staggering 2.5 million hours MTBF (Mean Time Between Failures).

WD Gold features technology such as a multi-axis shock sensor, stable track, and dynamic fly height. It comes with a warranty of five years.

--------------------------------------------------------------------------

+Profiling (April 13, 2020, 8:41 p.m.)

Profiling is the process of measuring metrics of your project, such as server response time, CPU usage, memory usage, etc.
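
For example, a quick way to measure server response time from the shell, using curl's built-in timing variable (the URL is a placeholder):
curl -o /dev/null -s -w '%{time_total}\n' https://mohsenhassani.com/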

+Firefox - Addons (Feb. 23, 2020, 12:28 p.m.)

YouTube Video Downloader 1-Click Group

https://addons.mozilla.org/en-US/firefox/addon/youtube-download-hd-download/


https://addons.mozilla.org/en-US/firefox/addon/youtube_downloader/

------------------------------------------------------------------------------

+NextCloud Server (Feb. 8, 2020, 5:04 p.m.)

1- Install all the dependencies:
apt install apache2 libapache2-mod-php mariadb-server php-xml php-cli php-cgi php-mysql php-mbstring php-gd php-curl php-zip



2- Restart Apache to make sure that it's using the PHP module:
systemctl restart apache2



3- Nextcloud keeps track of everything in a database. Plus, like most web applications, it stores its own information and settings in it too.
Run the built-in secure installation script to remove junk and set up your admin account.

sudo mysql_secure_installation

Follow the instructions, and set up a new root password when asked. You can accept the defaults for everything.



4- Sign in to MariaDB using the root password that you just established:
mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER 'nextclouduser'@'localhost' IDENTIFIED BY 'yourpassword';
GRANT ALL ON nextcloud.* TO 'nextclouduser'@'localhost';
FLUSH PRIVILEGES;
\q



5- Download Nextcloud from the following link:
wget https://download.nextcloud.com/server/releases/nextcloud-18.0.0.zip
unzip nextcloud-*.zip
cp -r nextcloud /var/www/html/nextcloud
chown -R www-data:www-data /var/www/html/nextcloud



6- Open your browser, and navigate to your Nextcloud server:
http://<ip_address>/nextcloud
You'll arrive on the Nextcloud setup page. Enter a username and password for your admin user.
Next, scroll down, and enter the information for the database that you set up, including the username and password of the user you created to manage it.

+AHCI vs. IDE vs. RAID (Feb. 7, 2020, 2:04 p.m.)

IDE, AHCI, and RAID are all operating modes in SATA environments. Each has its relative strengths and weakness.

IDE and AHCI are peripheral component interconnect (PCI) devices that move data between system memory and SATA controllers. Both add more advanced storage features.

AHCI is newer than IDE and enables more advanced storage features. However, both are older technologies that are not in widespread usage in storage arrays, especially with the growth of SSDs.

RAID is hardware or software that provides redundancy in multiple device environments, and accelerates HDDs. Like AHCI and IDE, RAID supports SATA controllers, and many RAID products enable AHCI upon installation to provide advanced storage features for single-disk applications.


In practice, the technologies are viewed as such:
- IDE is largely an obsolete technology, used only in older scenarios.
- AHCI still acts as a bus in some older SATA HDD arrays and hybrid arrays.
- RAID is still widely deployed for HDD and hybrid array data protection and redundancy.

------------------------------------------------------------------------

What is AHCI?

Advanced Host Controller Interface (AHCI) is an Intel computer standard that is limited to Intel chipsets. AHCI has been around since 2004, where it replaced the older IDE/Parallel ATA interface in new devices.

AHCI is not identical to SATA but acts as the bus between the host and AHCI or SATA controllers on the motherboard. The protocol improves storage management features on the SATA controller by enabling Native Command Queuing (NCQ) and hot-swapping.

------------------------------------------------------------------------

What is IDE?

Integrated Drive Electronics (IDE) is older than AHCI. It specifies a computer interface that connects disk storage with the motherboard bus. In 1986, Western Digital released the IDE spec in partnership with Compaq and Control Data Corp.

At the time, IDE-supported ATA drives were much faster than standard SCSI drives, and the market widely deployed the new IDE platforms. Also called parallel ATA, or PATA, IDE interconnects transfer 16 bits at a time across two device connections per channel.

------------------------------------------------------------------------

What is RAID?

RAID, or “redundant array of independent disks”, is another mature technology that is widely deployed in storage environments.

RAID provides high availability and data protection across multiple nodes, which enables HDDs and SSDs to keep running after the loss of a device. RAID is available for SSD arrays, but since it does not accelerate SSD performance, all-flash arrays are likelier to use proprietary RAID that provides redundancy and accelerates performance on SSDs.


The most widely used RAID types, or levels, are 0, 1, 5, 6, and 10. There are also SSD-specific RAID options in the market.

RAID 0: Striping. Splits files and stripes the data across two or more disks, treating the striped disks as a single partition.

RAID 1: Mirroring. Copies protected disk to 2nd disk. If the mirrored disk fails, the functioning disk takes over.

RAID 5: Striping with Parity. Distributes striping and parity (raw binary data containing data values) at a block level.

RAID 6: Striping with double parity. Like RAID 5, but with a minimum of 4 disks.

RAID 10: Striping and Mirroring. Stripes across at least 4 disks for higher performance, and mirrors for redundancy.

SSDs can use traditional RAID levels. However, although RAID can improve performance on HDDs, SSDs' native high speeds do not benefit from RAID speed enhancements. SSD vendors are concentrating on adding proprietary RAID functions for all-flash arrays.
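
As a minimal sketch of setting up RAID 1 (mirroring) with Linux software RAID, assuming two spare disks /dev/sdb and /dev/sdc:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
cat /proc/mdstat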

------------------------------------------------------------------------

+HTTP Status Codes (Jan. 15, 2020, 4:18 p.m.)

https://httpstatuses.com/


1×× Informational
100 Continue
101 Switching Protocols
102 Processing

2×× Success
200 OK
201 Created
202 Accepted
203 Non-authoritative Information
204 No Content
205 Reset Content
206 Partial Content
207 Multi-Status
208 Already Reported
226 IM Used

3×× Redirection
300 Multiple Choices
301 Moved Permanently
302 Found
303 See Other
304 Not Modified
305 Use Proxy
307 Temporary Redirect
308 Permanent Redirect

4×× Client Error
400 Bad Request
401 Unauthorized
402 Payment Required
403 Forbidden
404 Not Found
405 Method Not Allowed
406 Not Acceptable
407 Proxy Authentication Required
408 Request Timeout
409 Conflict
410 Gone
411 Length Required
412 Precondition Failed
413 Payload Too Large
414 Request-URI Too Long
415 Unsupported Media Type
416 Requested Range Not Satisfiable
417 Expectation Failed
418 I'm a teapot
421 Misdirected Request
422 Unprocessable Entity
423 Locked
424 Failed Dependency
426 Upgrade Required
428 Precondition Required
429 Too Many Requests
431 Request Header Fields Too Large
444 Connection Closed Without Response
451 Unavailable For Legal Reasons
499 Client Closed Request

5×× Server Error
500 Internal Server Error
501 Not Implemented
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
505 HTTP Version Not Supported
506 Variant Also Negotiates
507 Insufficient Storage
508 Loop Detected
510 Not Extended
511 Network Authentication Required
599 Network Connect Timeout Error

+Telegram Font Problem (Sept. 10, 2019, 12:13 p.m.)

1- Download a TTF font:
https://github.com/rastikerdar/vazir-font/tree/master/dist


2- Create a fonts directory and copy the font into it (make sure to rename the file to lowercase letters):
mkdir ~/.fonts/
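
Assuming the downloaded file is Vazir.ttf, copying it and refreshing the font cache would look like this:
cp Vazir.ttf ~/.fonts/vazir.ttf
fc-cache -fv ~/.fonts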


3- Edit the Telegram font config file:
vim ~/.local/share/TelegramDesktop/tdata/fc-custom-1.conf

Add this block after all the <match> tags:
<match target="pattern">
<test qual="any" name="family">
<string>Vazir</string>
</test>
<edit name="family" mode="assign" binding="same">
<string>Vazir</string>
</edit>
</match>

+CAPTCHA (Oct. 14, 2018, 9:39 a.m.)

CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart.

This is a challenge test to differentiate between humans and automated bots based on the response. reCAPTCHA is one of the CAPTCHA spam protection services, bought by Google. It is now offered for free to webmasters, and Google also uses reCAPTCHA on its own services like Google Search.

-------------------------------------------------------------

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

+List of administrative divisions by country (Sept. 15, 2018, 12:57 p.m.)

https://en.wikipedia.org/wiki/List_of_administrative_divisions_by_country

+Accuracy of latitude and longitude (July 5, 2018, 8:39 a.m.)

Decimal places   Precision        Notes

1                10 kilometers    6.2 miles
2                1 kilometer      0.62 miles
3                100 meters       About 328 feet
4                10 meters        About 33 feet
5                1 meter          About 3 feet
6                10 centimeters   About 4 inches
7                1.0 centimeter   About 1/2 an inch
8                1.0 millimeter   The width of paperclip wire.
9                0.1 millimeter   The width of a strand of hair.
10               10 microns       A speck of pollen.
11               1.0 micron       A piece of cigarette smoke.
12               0.1 micron       You're doing virus-level mapping at this point.
13               10 nanometers    Does it matter how big this is?
14               1.0 nanometer    Your fingernail grows about this far in one second.
15               0.1 nanometer    An atom. An atom! What are you mapping?

+Exporting an object as svg from inkscape (May 11, 2019, 1:42 a.m.)

A straightforward method is the following:

Select the object(s) to export.
"Resize page to drawing or selection" with Ctrl+Shift+R.
"Invert selection" with !, and Del all other objects.
"Save As" with Ctrl+Shift+S.
Select Optimized SVG as the format if you want to use it on the web.

+Firefox - DownThemAll addon - exclude 128 MP3s (May 9, 2017, 3:49 p.m.)

/[^128]...\.mp3$/,1080,Full HD,HQ

/\/[^\/\?128]+\.mp3$/,mp3

/\/[^\/\?128]+\.mp3$/,320,720p,Full HD,HQ

+Firefox - Disable Auto Refresh (May 7, 2017, 5:18 p.m.)

about:config
accessibility.blockautorefresh

+Serial Numbers (June 7, 2016, 10:50 a.m.)

VMware Workstation 12:
CA5MH-6YF0K-480WQ-8YM5V-XLKV4
-------------------------------------------------------------------
PyCharm + IntelliJ IDEA
For any change and update, follow up the comments on this website: http://us.idea.lanyus.com/

2016.1
https://бэкдор.рф/pycharm-activate-key-3-4-5-2016/

2016.2
http://jetbrains.tencent.click/
-------------------------------------------------------------------
iLO:
34T6L-4C9PX-X8D9C-GYD26-8SQWM
-------------------------------------------------------------------

+Firefox - A script on this page may be busy, or it may have stopped responding... (April 15, 2015, 4:08 p.m.)

In the Location bar, type about:config and press Enter.
Click I'll be careful, I promise! to continue to the about:config page.
In the about:config page, search for the preference dom.max_script_run_time, and double-click on it.
In the Enter integer value prompt, type 20.
Press OK.

+Web Proxies (Feb. 9, 2015, 1:17 p.m.)

http://buka.link/

MongoDB
+Export data (Oct. 26, 2019, 3:46 p.m.)

Export a Collection to a JSON File:

mongoexport --db mydb --collection posts --pretty --jsonArray --out posts.json

---------------------------------------------------------------------

Export a Collection to a CSV File:

mongoexport --db music --collection artists --type=csv --fields _id,artistname --out /data/dump/music/artists.csv

---------------------------------------------------------------------

Export the results of a Query:

mongoexport --db music --collection artists --query '{"artistname": "Miles Davis"}' --out /data/dump/music/miles_davis.json

---------------------------------------------------------------------

The --limit Option

mongoexport --db music --collection artists --limit 3 --out /data/dump/music/3_artists.json

---------------------------------------------------------------------

+Commands (Oct. 26, 2019, 2:07 p.m.)

Use the command "mongo" to access the MongoDB shell, then use the following commands.

=========================================

Create Database:

use mydb
db.students.insert({ id: 1 })

-----------------------------------------------------------------------

Show Databases:

show dbs

-----------------------------------------------------------------------

db.adminCommand( { listDatabases: 1 } )

The value (e.g. 1) does not affect the output of the command.

------------------------------------------------------------------------

db.adminCommand( { listDatabases: 1, nameOnly: true} )

db.adminCommand( { listDatabases: 1, filter: { "name": /^rep/ } } )

------------------------------------------------------------------------

Display current selected database:
db

------------------------------------------------------------------------

Delete Database:

use mydb
db.dropDatabase()

------------------------------------------------------------------------

Copy Database:
db.copyDatabase(fromdb, todb, fromhost, username, password, mechanism)

Example:
db.copyDatabase("olddb", "newdb")

You can use the above option to rename a database in MongoDB.

------------------------------------------------------------------------

Copy Database from Remote Instance

db.copyDatabase("remote_dbname", "local_dbname", "10.8.0.2", "username", "password")

------------------------------------------------------------------------

Create Collection:

use mydb
db.createCollection(name, options)
db.createCollection("NAME")

------------------------------------------------------------------------

Show Collection:
use mydb
show collections

------------------------------------------------------------------------

Rename Collection:
db.collection.renameCollection(target, dropTarget)

db.pproducts.renameCollection("products")

------------------------------------------------------------------------

Drop Collection:

db.COLLECTION_NAME.drop()

db.students.drop();

db.getCollection("students").drop();

------------------------------------------------------------------------

Insert Document:

db.COLLECTION_NAME.insert(document)

db.students.insert({
"id": 1001,
"username": "mohsen.hassani",
"name": [
{"first_name": "Mohsen"},
{"middle_name": ""},
{"last_name": "Hassani"}
],
"email": "mohsen@mohsenhassani.com",
"designation": "DevOps & Web Developer",
"location": "Tehran/Iran"
})

------------------------------------------------------------------------

Insert Multiple Documents:

Pass a list of comma-separated documents (a JSON array). For example, with placeholder values:

var students = [
    {"id": 1002, "username": "user.two"},
    {"id": 1003, "username": "user.three"}
];

db.students.insert(students);

------------------------------------------------------------------------

Query Document:

db.COLLECTION_NAME.find(condition)


Get all the available documents in the collection:
db.students.find()

db.students.find().pretty();

------------------------------------------------------------------------

Search Specific Documents:

db.students.find({"id": 1001})

------------------------------------------------------------------------

Update Document:

db.students.update(CONDITION, UPDATED DATA, OPTIONS)

db.students.update({"id": 1001}, {$set: {'location': 'Australia'}})

------------------------------------------------------------------------

Delete Document:

db.collection.remove(CONDITION)



Delete Matching Document:

db.students.remove({"username": "mohsen.hassani"})
Removes all documents having the username mohsen.hassani.


To remove only first matching document from collection:
db.students.remove({"username": "mohsen.hassani"}, 1)


Delete All Documents in Collection:
db.students.remove({})

------------------------------------------------------------------------

limit() Method:

Use the limit() method to show a limited number of documents in a collection with the find() method.

db.COLLECTION_NAME.find().limit(NUMBER)

db.students.find().limit(2);

------------------------------------------------------------------------

sort() method:

db.COLLECTION_NAME.find().sort({KEY: 1 | -1})

(1 = ascending, -1 = descending)

Ascending Order:
db.students.find().sort({username:1});


Descending Order:
db.students.find().sort({username:-1});
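sort() also accepts multiple keys, applied left to right:

db.students.find().sort({ "location": 1, "username": -1 });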

------------------------------------------------------------------------

count() Method:

db.COLLECTION_NAME.count(query)

db.students.count();


Count with Find:
db.students.find({"username": "mohsen.hassani"}).count()

------------------------------------------------------------------------

+Terminology (Oct. 26, 2019, 2:02 p.m.)

RDBMS             MongoDB

Database          Database
Table             Collection
Tuple/Row         Document
Column            Field
Table Join        Embedded Documents
Primary Key       Primary Key (default key _id provided by MongoDB itself)


Database Server and Client:
mysqld/Oracle     mongod
mysql/sqlplus     mongo

+Service & Logs (Oct. 26, 2019, 11:15 a.m.)

sudo service mongod start

------------------------------------------------------------

/var/log/mongodb/mongod.log
[initandlisten] waiting for connections on port 27017

27017 is the default port the standalone mongod listens on.
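On systemd-based systems the equivalent systemctl commands also work, including enabling the service at boot:

sudo systemctl status mongod
sudo systemctl enable mongod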

------------------------------------------------------------

+Installation (Oct. 26, 2019, 11:14 a.m.)

https://docs.mongodb.com/manual/tutorial/install-mongodb-on-debian/

----------------------------------------------------------

The unofficial mongodb package provided by Debian is not maintained by MongoDB and conflicts with MongoDB's officially supported packages. Use the official MongoDB mongodb-org packages, which are kept up to date with the most recent major and minor MongoDB releases.

To check whether Debian's mongodb package is installed on the system, run sudo apt list --installed | grep mongodb. You can use sudo apt remove mongodb and sudo apt purge mongodb to remove and purge the mongodb package before attempting this procedure.

----------------------------------------------------------

Installation:

1- wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -

2- echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list

3- sudo apt update

4- sudo apt install mongodb-org

----------------------------------------------------------

MySQL
+Increase the packet size (July 10, 2020, 2:41 a.m.)

This can be set on your server as it's running:

set global max_allowed_packet=104857600;

This sets it to 100 MB. Note that SET GLOBAL only affects new connections and is lost when the server restarts; persist the setting in the config file as shown below.
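To verify the global value currently in effect:

SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';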

----------------------------------------------------------

Edit the file /etc/my.cnf:

[mysqld]
max_allowed_packet=16M
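The config-file value only takes effect after the server is restarted:

sudo service mysql restart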

----------------------------------------------------------

+Rename column (Jan. 3, 2020, 7:09 p.m.)

ALTER TABLE tableName CHANGE `oldcolname` `newcolname` datatype(length);

CHANGE works on all MySQL versions but requires restating the full column definition.


ALTER TABLE table_name RENAME COLUMN old_col_name TO new_col_name;

RENAME COLUMN keeps the existing definition but requires MySQL 8.0 or later.
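A concrete pair of sketches against the vendors table used in the Statements note below (the phone column definition is assumed):

ALTER TABLE vendors CHANGE `phone` `phone_number` VARCHAR(15);

ALTER TABLE vendors RENAME COLUMN phone TO phone_number;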

+Statements (Nov. 6, 2019, 12:47 p.m.)

Add Columns to a Table:

ALTER TABLE table
ADD [COLUMN] column_name column_definition [FIRST|AFTER existing_column];


ALTER TABLE vendors
ADD COLUMN phone VARCHAR(15) AFTER name;


ALTER TABLE vendors
ADD COLUMN vendor_group INT NOT NULL;


ALTER TABLE vendors
ADD COLUMN email VARCHAR(100) NOT NULL,
ADD COLUMN hourly_rate decimal(10,2) NOT NULL;

----------------------------------------------------------------------

+Access denied with non-root user (July 14, 2019, 11:30 a.m.)

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '123456';
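MySQL 8 defaults to the caching_sha2_password authentication plugin, which many older clients and drivers cannot authenticate against; switching the account to mysql_native_password fixes the error. The same statement works for the non-root account itself (the user name and password here are illustrative):

ALTER USER 'test'@'localhost' IDENTIFIED WITH mysql_native_password BY 'new_password';
FLUSH PRIVILEGES;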

+Recover root password (April 15, 2018, 5:25 p.m.)

1- /etc/init.d/mysql stop


2- Use the following command to find the processes still using mysql, then stop them with kill -9 <pid>:
ps aux | grep mysql


3- /usr/sbin/mysqld --skip-grant-tables --skip-networking &


4- mysql -u root


5- FLUSH PRIVILEGES;


6-
Reset/update your password:
SET PASSWORD FOR root@'localhost' = PASSWORD('password');

If you have mysql root accounts that can connect from other hosts, update them as well. This statement changes the password for every root account regardless of host:
UPDATE mysql.user SET Password=PASSWORD('newpwd') WHERE User='root';

To target only the root account that can connect from anywhere (Host = '%'):
USE mysql
UPDATE user SET Password = PASSWORD('newpwd')
WHERE Host = '%' AND User = 'root';

(On MySQL 5.7+ the password column is named authentication_string instead of Password.)
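On MySQL 5.7+/8.0 the PASSWORD() function is removed; after starting with --skip-grant-tables, run FLUSH PRIVILEGES; first (step 5) and then use the modern statement instead of the UPDATEs above:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'new_password';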


7- FLUSH PRIVILEGES;


8- /etc/init.d/mysql start

+Error - Access denied for user 'test'@'localho