124 Notes
+ Create SSH Keys (Nov. 7, 2022, 12:25 p.m.)

from io import StringIO

import paramiko

rsa_key = paramiko.RSAKey.generate(bits=4096)
private_string = StringIO()
rsa_key.write_private_key(private_string)
public_key = rsa_key.get_base64()
print(public_key, private_string.getvalue())

+ venv (July 12, 2022, 11:08 a.m.)

python3.9 -m venv venv
source venv/bin/activate
(venv)$ pip install -r requirements.txt

+ Ellipsis or Three dots(…) (March 20, 2022, 1:19 p.m.)

Ellipsis is a built-in Python object. It has no methods and is a singleton, i.e. there is only ever one Ellipsis instance, written as the literal `...`. Common use cases of Ellipsis:
- The default secondary prompt in the Python interpreter
- Accessing and slicing multidimensional arrays in NumPy
- In type hinting, e.g. Callable[..., str]
- As a placeholder body inside functions, similar to a pass statement
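The uses listed above can be sketched in a few lines. This is an illustrative sketch only; the names `not_implemented_yet` and `handler` are made up for the demo.

```python
from typing import Callable

# `...` is the singleton Ellipsis object:
print(... is Ellipsis)  # True

# As a placeholder body, similar to `pass`:
def not_implemented_yet():
    ...

# In type hints: a callable with an arbitrary signature returning str
handler: Callable[..., str] = lambda *args, **kwargs: "ok"
print(handler(1, 2))  # ok
```

NumPy-style slicing is also built on Ellipsis: `arr[...]` simply passes the Ellipsis object to `__getitem__`.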

+ Walrus operator := (March 5, 2022, 11 a.m.)

The name of the operator comes from the fact that it resembles the eyes and tusks of a walrus on its side. The walrus operator creates an assignment expression: it allows us to assign a value to a variable inside a Python expression, which makes our code more compact.

print(is_new := True)

We can assign and print a variable in one go. Without the walrus operator, we have to use two lines:

is_new = True
print(is_new)
---------------------------------------------------------------------------------
Python walrus read input: In the following example, we use the walrus operator in a while loop.

#!/usr/bin/env python

words = []

while (word := input("Enter word: ")) != "quit":
    words.append(word)

print(words)
---------------------------------------------------------------------------------
Python walrus with if condition: Suppose that all our words must have at least three characters.

#!/usr/bin/env python

words = ['falcon', 'sky', 'ab', 'water', 'a', 'forest']

for word in words:
    if (n := len(word)) < 3:
        print(f'warning, the word {word} has {n} characters')

In the example, we use the walrus operator to test the length of a word. If a word has fewer than three characters, a warning is issued. We determine and assign the length of a word in one step.
---------------------------------------------------------------------------------

+ Descriptor (Feb. 20, 2022, 11:21 a.m.)

Descriptors are a way of controlling attribute access of an object. The nice thing about them is that you remove the responsibility of setting and retrieving attributes from the class and give it to another class that has only this one purpose, so they help you follow the Single Responsibility Principle (SRP). The @property decorator basically does the same thing as a descriptor (it is implemented using the descriptor protocol).

class IsBetween:
    def __init__(self, min_value, max_value, below_exception=ValueError(), above_exception=ValueError()):
        self.min_value = min_value
        self.max_value = max_value
        self.below_exception = below_exception
        self.above_exception = above_exception

    def __set_name__(self, owner, name):
        self.private_name = '_' + name
        self.public_name = name

    def __set__(self, obj, value):
        if value < self.min_value:
            raise self.below_exception
        if value > self.max_value:
            raise self.above_exception
        setattr(obj, self.private_name, value)

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.private_name)


class Car:
    fuel_amount = IsBetween(0, 60, ValueError(), ValueError())

    def __init__(self):
        self.fuel_amount = 0

+ Convert Arabic letters to Farsi (Feb. 4, 2022, 3:54 p.m.)

first_name = first_name.replace('ي', 'ی').replace('ك', 'ک').replace('ئی', 'یی')

+ IP Address (Jan. 21, 2022, 4:09 p.m.)

import ipaddress

# The original note left the addresses blank; these are example values:
start_ip = ipaddress.IPv4Address('')
end_ip = ipaddress.IPv4Address('')

for ip_int in range(int(start_ip), int(end_ip)):
    print(ipaddress.IPv4Address(ip_int))

For Python 2, use a Unicode string (u'...') inside IPv4Address().
-------------------------------------------------------------------------
ipaddress.IPv4Address('') + 3
ipaddress.IPv4Address('') - 3
list(ipaddress.ip_network('').hosts())
-------------------------------------------------------------------------

+ multiprocessing (July 30, 2021, 4:15 p.m.)

import multiprocessing

def create_file(file_name):
    # create an empty file and close it right away
    with open(file_name, 'w'):
        pass

def create_files(file_names, processes):
    with multiprocessing.Pool(processes) as pool:
        for _ in pool.imap_unordered(create_file, file_names):
            pass

create_files(
    ['a.txt', 'b.txt', 'c.txt', 'd.txt', 'e.txt', 'f.txt', 'g.txt', 'h.txt'],
    3
)

+ Usage of %r and %s (June 20, 2021, 12:22 p.m.)

In Python, there are two built-in functions for turning an object into a string: str and repr. str is supposed to be a friendly, human-readable string. repr is supposed to include detailed information about an object's contents. Sometimes they'll return the same thing, such as for integers. The %s specifier converts the object using str(), and %r converts it using repr(). Here's an example, using a date:

>>> import datetime
>>> d =, 5, 14)
>>> str(d)
'2011-05-14'
>>> repr(d)
', 5, 14)'

+ Symmetric/Asymmetric Cryptography (June 17, 2021, 1:55 p.m.)

Symmetric Cryptography:
In this type, the encryption and decryption process uses the same key. It is also called secret-key cryptography. The main features of symmetric cryptography are as follows:
- It is simpler and faster.
- The two parties must exchange the key in a secure way.
Drawback: The major drawback of symmetric cryptography is that if the key is leaked to an intruder, the message can easily be decrypted and altered, which is a serious risk.
Data Encryption Standard (DES): The most popular symmetric-key algorithm is the Data Encryption Standard (DES), and Python has a package that implements the logic behind the DES algorithm:
pip install pyDES
-----------------------------------------------------------------------
Asymmetric Cryptography:
It is also called public-key cryptography. It works in the reverse way of symmetric cryptography: it requires two keys, one for encryption and the other for decryption. The public key is used for encrypting and the private key is used for decrypting.
Drawbacks:
- Due to its key length, it has a lower encryption speed.
- Key management is crucial.
-----------------------------------------------------------------------
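The defining property of symmetric cryptography, that the same key both encrypts and decrypts, can be shown with a toy XOR cipher. This is NOT secure cryptography (real code should use a vetted library such as the `cryptography` package); it only illustrates the key symmetry.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key, cycling through the key.
    # XOR is its own inverse, so applying the SAME key twice
    # round-trips the data -- the essence of a symmetric cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"
ciphertext = xor_cipher(b"attack at dawn", key)
plaintext = xor_cipher(ciphertext, key)  # same key reverses it
print(plaintext)  # b'attack at dawn'
```

In an asymmetric scheme, by contrast, no single value plays both roles: the encryption key (public) cannot undo its own work.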

+ Generate random password (June 16, 2021, 5:11 p.m.)

import random
import string

password_characters = string.ascii_letters + string.digits + string.punctuation
password = ''.join([random.choice(password_characters) for _ in range(32)])
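For security-sensitive passwords, the standard-library `secrets` module is the recommended choice, since `random` is not a cryptographically secure source of randomness:

```python
import secrets
import string

# Same idea as above, but secrets.choice() draws from the OS's
# cryptographically secure random source.
alphabet = string.ascii_letters + string.digits + string.punctuation
password = ''.join(secrets.choice(alphabet) for _ in range(32))
print(len(password))  # 32
```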

+ PEP 8 - Style Guide for Python Code (May 17, 2021, 10:09 a.m.)

+ Namespace Packages (April 21, 2021, 1:33 p.m.)

Python versions 3.3 and later support Python packages without an file. These packages are known as “namespace packages” and may be spread across multiple directories at different locations on sys.path.

+ Assertion and assert (Feb. 2, 2021, 9:38 a.m.)

What is Assertion? Assertions are statements that assert or state a fact confidently in your program. For example, while writing a division function, you're confident the divisor shouldn't be zero; you assert that the divisor is not equal to zero. Assertions are simply boolean expressions that check whether a condition returns true or not. If it is true, the program does nothing and moves to the next line of code. However, if it's false, the program stops and throws an error. An assertion is also a debugging tool, as it brings the program to a halt as soon as an error occurs and shows at which point the error occurred.
--------------------------------------------------------------------
Test if a condition returns True:

x = "hello"

# If the condition returns True, then nothing happens:
assert x == "hello"

# If the condition returns False, AssertionError is raised:
assert x == "goodbye"
--------------------------------------------------------------------

+ Unit Tests (Jan. 29, 2021, 11:03 a.m.)

Unit Testing is the first level of software testing, where the smallest testable parts of a piece of software are tested. This is used to validate that each unit of the software performs as designed. unittest is the batteries-included test module in the Python standard library. Its API will be familiar to anyone who has used any of the JUnit/nUnit/CppUnit series of tools.
------------------------------------------------------------------------
Unit tests, by definition, examine each unit of your code separately. But when your application is run for real, all those units have to work together, and the whole is more complex and subtle than the sum of its independently-tested parts.
------------------------------------------------------------------------
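A minimal unittest sketch, with one tiny unit under test (the `add` function and `TestAdd` class are made up for the demo):

```python
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Each test method checks one small, isolated behaviour.
    def test_ints(self):
        self.assertEqual(add(2, 3), 5)

    def test_strings(self):
        self.assertEqual(add('uni', 'ttest'), 'unittest')

# Run the suite programmatically (normally: python -m unittest test_module)
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd))
print(result.wasSuccessful())  # True
```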

+ Exploratory Testing (Jan. 29, 2021, 11:01 a.m.)

Exploratory testing is a form of testing that is done without a plan. In an exploratory test, you’re just exploring the application. To have a complete set of manual tests, all you need to do is make a list of all the features your application has, the different types of input it can accept, and the expected results. Now, every time you make a change to your code, you need to go through every single item on that list and check it.

+ Convert bytes to Kilobytes, Megabytes, Gigabytes and Terabytes (Jan. 13, 2021, 6:32 p.m.)

def convert_bytes(bytes_number):
    tags = ["Byte", "Kilobyte", "Megabyte", "Gigabyte", "Terabyte"]
    i = 0
    double_bytes = bytes_number
    while i < len(tags) and bytes_number >= 1024:
        double_bytes = bytes_number / 1024.0
        i = i + 1
        bytes_number = bytes_number / 1024
    return str(round(double_bytes, 2)) + " " + tags[i]
--------------------------------------------------------
print(convert_bytes(4896587482345))
print(convert_bytes(9876524362))
print(convert_bytes(10248000))
print(convert_bytes(1048576))
print(convert_bytes(1024000))
print(convert_bytes(475445))
print(convert_bytes(1024))
print(convert_bytes(75))
print(convert_bytes(0))

Output:
4.45 Terabyte
9.2 Gigabyte
9.77 Megabyte
1.0 Megabyte
1000.0 Kilobyte
464.3 Kilobyte
1.0 Kilobyte
75 Byte
0 Byte

+ self vs cls - Class methods (Jan. 1, 2021, 5:02 p.m.)

The difference between the keywords self and cls resides only in the method type. If the created method is an instance method, then the reserved word self has to be used; if the method is a class method, then the keyword cls must be used. Finally, if the method is a static method, then neither of those words is used, because static methods are self-contained and do not have access to the instance or class variables, nor to the instance or class methods.

+ Three different method types (Jan. 1, 2021, 5:05 p.m.)

In Python there are three different method types: the static method, the class method, and the instance method.
--------------------------------------------------------------
Static methods
A static method in Python must be created by decorating it with @staticmethod in order to let Python know that the method should be static. The main characteristic of a static method is that it can be called without instantiating the class. These methods are self-contained, meaning that they cannot access any other attribute or call any other method within the class.
--------------------------------------------------------------
Class methods
These methods have to be created with the decorator @classmethod, and they share a characteristic with static methods in that they can be called without having an instance of the class. The difference lies in their capability to access other methods and class attributes, but not instance attributes.
--------------------------------------------------------------
Instance methods
This method can only be called if the class has been instantiated. Once an object of the class has been created, the instance method can be called and can access all the attributes of that class through the reserved word self. An instance method is capable of creating, getting, and setting new instance attributes and calling other instance, class, and static methods.
--------------------------------------------------------------
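The three method types in one compact class (the `Circle` example is made up for the demo):

```python
class Circle:
    pi = 3.14159  # class attribute

    def __init__(self, radius):
        self.radius = radius  # instance attribute

    def area(self):                  # instance method: needs self
        return Circle.pi * self.radius ** 2

    @classmethod
    def unit(cls):                   # class method: gets the class, no instance
        return cls(1)

    @staticmethod
    def describe():                  # static method: no self, no cls
        return "a circle"

print(Circle.describe())     # callable without an instance
print(Circle.unit().area())  # 3.14159
print(Circle(2).area())      # 12.56636
```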

+ Poetry (Oct. 15, 2020, 11:16 a.m.)

Installing / Removing packages:
poetry shell (Activate the environment)
poetry add pdfkit (Adds the package to pyproject.toml and installs the latest version)
poetry remove pdfkit (Removes the package along with its dependencies)
poetry show --tree (Lists packages with their dependencies in a tree structure)
-------------------------------------------------------------------------
Install poetry isolated from the rest of your system:
curl -sSL | python
-------------------------------------------------------------------------
Updating poetry:
poetry self update
-------------------------------------------------------------------------
Create a Python Project with Poetry:
1- poetry new --name my-project --src my_project
2- cd my-project
3- poetry env use python3
-------------------------------------------------------------------------
Activate an environment:
poetry shell
If it didn't activate the shell (did not show the environment name at the beginning of the prompt), use the following command:
source $(poetry env info --path)/bin/activate
-------------------------------------------------------------------------
Add dependency:
poetry add django (django will be added to the pyproject.toml file)
--dev (-D): Add package as a development dependency.
-------------------------------------------------------------------------
If you don't like that poetry initializes a project for you, or if you already have a project that you want to control with poetry, you can use the init command. You will get an interactive shell to configure your project.
poetry init
-------------------------------------------------------------------------
If we want to install a development dependency, i.e. one not related directly to your project, like pytest, we can do so using the -D option.
poetry add -D pytest
-------------------------------------------------------------------------
List dependencies:
poetry show --tree
poetry show --latest
-------------------------------------------------------------------------
Install the project:
poetry install
-------------------------------------------------------------------------
Display virtual environment:
poetry env info
poetry env info --path
poetry env list --full-path
-------------------------------------------------------------------------
Update packages:
poetry update
poetry update package1 package2
-------------------------------------------------------------------------
Remove a package:
poetry remove requests
If it is a development package, we must pass the -D option to the command:
poetry remove -D pytest
-------------------------------------------------------------------------
Remove an environment:
poetry env remove notes2-1yf80iP--py3.8
-------------------------------------------------------------------------
Setup Django with poetry:
1- mkdir django_project
2- cd django_project
3- poetry init --no-interaction --dependency Django
4- vim pyproject.toml (Change the python version)
5- poetry env use python3.8
6- poetry shell
7- poetry install
poetry run django-admin startproject project
poetry run python migrate
-------------------------------------------------------------------------
Export: This command exports the lock file to other formats.
poetry export -f requirements.txt --output requirements.txt
Only the requirements.txt format is currently supported.
--dev: Include development dependencies.
-------------------------------------------------------------------------
Lock: This command locks (without installing) the dependencies specified in pyproject.toml. By default, this will lock all dependencies to the latest available compatible versions. To only refresh the lock file, use the --no-update option.
poetry lock
-------------------------------------------------------------------------
Dependency specification:

Caret requirements:
^1.2.3  >=1.2.3 <2.0.0
^1.2    >=1.2.0 <2.0.0
^1      >=1.0.0 <2.0.0
^0.2.3  >=0.2.3 <0.3.0
^0.0.3  >=0.0.3 <0.0.4
^0.0    >=0.0.0 <0.1.0
^0      >=0.0.0 <1.0.0

Tilde requirements:
~1.2.3  >=1.2.3 <1.3.0
~1.2    >=1.2.0 <1.3.0
~1      >=1.0.0 <2.0.0

Wildcard requirements:
*       >=0.0.0
1.*     >=1.0.0 <2.0.0
1.2.*   >=1.2.0 <1.3.0

Inequality requirements:
>= 1.2.0
> 1
< 2
!= 1.2.3

Multiple requirements: Multiple version requirements can also be separated with a comma, e.g. >= 1.2, < 1.5.

Exact requirements:
==1.2.3

@ operator: When adding dependencies via poetry add, you can use the @ operator. This is understood similarly to the == syntax, but also allows prefixing any specifiers that are valid in pyproject.toml.
poetry add django@^4.0.0
The above would translate to the following entry in pyproject.toml:
Django = "^4.0.0"
The special keyword latest is also understood by the @ operator:
poetry add django@latest
Django = "^4.0.5"

git dependencies:
[tool.poetry.dependencies]
requests = { git = "" }
-------------------------------------------------------------------------

+ str() vs repr() (Sept. 3, 2020, 6:56 p.m.)

str() and repr() are both used to get a string representation of an object. The repr() function returns a printable representational string of the given object.
---------------------------------------------------------------
import datetime

today =

# Prints a readable format of the date-time object
print(str(today))
2020-09-03 20:29:48.753816

# Prints the official format of the date-time object
print(repr(today))
datetime.datetime(2020, 9, 3, 20, 29, 48, 753816)
---------------------------------------------------------------

+ __new__ and __init__ (Sept. 1, 2020, 9:21 p.m.)

Use __new__ when you need to control the creation of a new instance. Use __init__ when you need to control the initialization of a new instance.

class Shape:
    def __new__(cls, sides, *args, **kwargs):
        if sides == 3:
            return Triangle(*args, **kwargs)
        else:
            return Square(*args, **kwargs)

class Triangle:
    def __init__(self, base, height):
        self.base = base
        self.height = height

    def area(self):
        return (self.base * self.height) / 2

class Square:
    def __init__(self, length):
        self.length = length

    def area(self):
        return self.length * self.length

a = Shape(sides=3, base=2, height=12)
b = Shape(sides=4, length=2)
print(str(a.__class__))
print(a.area())
print(str(b.__class__))
print(b.area())

+ Search in a list of dictionaries (Aug. 15, 2020, 11:01 p.m.)

complete_call = next(c for c in complete_call_records if c['call_id'] == queue_log.call_id)
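Note that next() raises StopIteration when nothing matches; passing a default (here None) avoids that. The `call_records` list and its `call_id` key below are made-up sample data for the demo:

```python
call_records = [
    {'call_id': 1, 'duration': 30},
    {'call_id': 2, 'duration': 45},
]

# With a default, a missing record yields None instead of StopIteration.
found = next((c for c in call_records if c['call_id'] == 2), None)
missing = next((c for c in call_records if c['call_id'] == 99), None)
print(found)    # {'call_id': 2, 'duration': 45}
print(missing)  # None
```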

+ Convert Pipfile to requirements.txt (Aug. 12, 2020, 9:11 a.m.)

1- Install the tool:
pip install pipfile-requirements
2- Export the requirements from the Pipfile:
pipfile2req Pipfile > requirements.txt

+ Pathlib (Aug. 10, 2020, 3:27 p.m.)

from pathlib import Path

Path.home()
PosixPath('/home/mohsen')
----------------------------------------------------------------------
Path.cwd()
PosixPath('/home/mohsen/Projects')
----------------------------------------------------------------------
Path.cwd() / 'output' / 'output.xlsx'
----------------------------------------------------------------------
top_xlsx_files = Path.cwd().glob('*.xlsx')
all_xlsx_files = Path.cwd().rglob('*.xlsx')
----------------------------------------------------------------------
Path.mkdir(): Create a new directory at the given path Open the file pointed to by the path
Path.rename(): Rename a file or directory to the given target
Path.rmdir(): Remove the empty directory
Path.unlink(): Remove the file or symbolic link
----------------------------------------------------------------------
Path('.editorconfig').write_text('# config goes here')
----------------------------------------------------------------------
path = Path('.editorconfig')
with open(path, mode='wt') as config:
    config.write('# config goes here')
----------------------------------------------------------------------
# '' is an example file name
Path.home().joinpath('python', 'scripts', '')
----------------------------------------------------------------------
with'r') as fid:
    ...
----------------------------------------------------------------------
# '' is an example file name
path = pathlib.Path.cwd() / ''
path.read_text()

pathlib.Path('').read_text()
----------------------------------------------------------------------
The .resolve() method will find the full path:

path = pathlib.Path('')
path.resolve()
----------------------------------------------------------------------
Components of a Path:

path
PosixPath('/home/gahjelle/realpython/')
''
path.stem
'test'
path.suffix
'.md'
path.parent
PosixPath('/home/gahjelle/realpython')
path.parent.parent
PosixPath('/home/gahjelle')
path.anchor
'/'
----------------------------------------------------------------------
Counting files:

import collections
collections.Counter(p.suffix for p in pathlib.Path.cwd().iterdir())
collections.Counter(p.suffix for p in pathlib.Path.cwd().glob('*.p*'))
----------------------------------------------------------------------
Display a directory tree:

def tree(directory):
    print(f'+ {directory}')
    for path in sorted(directory.rglob('*')):
        depth = len(path.relative_to(directory).parts)
        spacer = ' ' * depth
        print(f'{spacer}+ {}')
----------------------------------------------------------------------

+ Collections - Counter (July 29, 2020, 2:22 p.m.)

The Counter collection keeps a count of all the elements inserted into the collection, along with their keys. It is a subclass of dict and is used to track items.

from collections import Counter

letters = Counter('Mohsen Hassani')
print(letters)
>>> Counter({'s': 3, 'n': 2, 'a': 2, 'M': 1, 'o': 1, 'h': 1, 'e': 1, ' ': 1, 'H': 1, 'i': 1})
--------------------------------------------------------------------------
counter = Counter(['a', 'a', 'b'])
print(counter)  # Counter({'a': 2, 'b': 1})
--------------------------------------------------------------------------
counter = Counter(a=2, b=3, c=1)
print(counter)  # Counter({'b': 3, 'a': 2, 'c': 1})
--------------------------------------------------------------------------
elements()
This method returns an iterator over the elements in the counter. Only elements with positive counts are returned.

counter = Counter({'Dog': 2, 'Cat': -1, 'Horse': 0})

# doesn't return elements with count 0 or less
elements = counter.elements()
for value in elements:
    print(value)

The above code will print "Dog" two times, because its count is 2. The other elements are ignored because they don't have a positive count. Counter is an unordered collection, so elements are returned in no particular order.
--------------------------------------------------------------------------
most_common(n)
This method returns the most common elements from the counter. If we don't provide a value for 'n', a list sorted from the most common to the least common element is returned. We can use slicing on this sorted list to get the least common elements.

counter = Counter({'Dog': 2, 'Cat': -1, 'Horse': 0})

most_common_element = counter.most_common(1)
print(most_common_element)  # [('Dog', 2)]

least_common_element = counter.most_common()[:-2:-1]
print(least_common_element)  # [('Cat', -1)]
--------------------------------------------------------------------------
subtract() and update()
Counter's subtract() method is used to subtract element counts from another counter. The update() method is used to add counts from another counter.

counter = Counter('ababab')
print(counter)  # Counter({'a': 3, 'b': 3})

c = Counter('abc')
print(c)  # Counter({'a': 1, 'b': 1, 'c': 1})

# subtract
counter.subtract(c)
print(counter)  # Counter({'a': 2, 'b': 2, 'c': -1})

# update
counter.update(c)
print(counter)  # Counter({'a': 3, 'b': 3, 'c': 0})
--------------------------------------------------------------------------
Miscellaneous operations on Python Counter:

counter = Counter({'a': 3, 'b': 3, 'c': 0})

print(sum(counter.values()))  # 6
print(list(counter))  # ['a', 'b', 'c']
print(set(counter))  # {'a', 'b', 'c'}
print(dict(counter))  # {'a': 3, 'b': 3, 'c': 0}
print(counter.items())  # dict_items([('a', 3), ('b', 3), ('c', 0)])

# remove 0 or negative count elements
counter = Counter(a=2, b=3, c=-1, d=0)
counter = +counter
print(counter)  # Counter({'b': 3, 'a': 2})

# clear all elements
counter.clear()
print(counter)  # Counter()
--------------------------------------------------------------------------

+ Collections - DefaultDict (July 28, 2020, 3:53 p.m.)

defaultdict is a subclass of the dict class that returns a dictionary-like object. The functionality of dict and defaultdict is almost the same, except that defaultdict never raises a KeyError: it provides a default value for a key that does not exist.

from collections import defaultdict

ages = defaultdict(int)
ages['mohsen'] = 35

names = defaultdict(list)
names['one'] = ['Mohsen']
names['one'].append('Mohsen 2')
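A typical use of the default value is grouping items without first checking whether a key exists (the word list below is made-up sample data):

```python
from collections import defaultdict

words = ['apple', 'avocado', 'banana', 'cherry']

# Group words by first letter; the first access to a missing key
# silently creates an empty list instead of raising KeyError.
by_letter = defaultdict(list)
for word in words:
    by_letter[word[0]].append(word)

print(dict(by_letter))  # {'a': ['apple', 'avocado'], 'b': ['banana'], 'c': ['cherry']}
```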

+ Collections - OrderedDict (July 28, 2020, 3:25 p.m.)

Python's OrderedDict maintains the order of insertion of the key-value pairs in the dictionary.

from collections import OrderedDict

>>> info = OrderedDict([
...     ('First_Name', 'Mohsen'),
...     ('Last Name', 'Hassani'),
...     ('Address', 'Earth')
... ])
>>> info
OrderedDict([('First_Name', 'Mohsen'), ('Last Name', 'Hassani'), ('Address', 'Earth')])

+ Escape Sequences (July 28, 2020, 10:26 a.m.)

\a ASCII Bell (BEL) --------------------------------------------------------------- \b ASCII Backspace (BS) --------------------------------------------------------------- \f ASCII Formfeed (FF) --------------------------------------------------------------- \n ASCII Linefeed (LF) --------------------------------------------------------------- \r ASCII Carriage Return (CR) --------------------------------------------------------------- \t ASCII Horizontal Tab (TAB) --------------------------------------------------------------- \v ASCII Vertical Tab (VT) --------------------------------------------------------------- \ooo Character with octal value ooo --------------------------------------------------------------- \xhh Character with hex value hh ---------------------------------------------------------------

+ \n vs \r (July 28, 2020, 10:10 a.m.)

\n is the newline character, while \r is the carriage return. They differ in what uses them. Windows uses \r\n to signify that the enter key was pressed, while Linux and Unix use \n alone.
-----------------------------------------------------------------
'\n' is the "Line Feed" and '\r' is the "Carriage Return". Different operating systems handle new lines differently: Windows expects a newline to be a combination of two characters, '\r\n'. Linux/Unix and modern macOS use a single '\n' for a new line. Classic Mac OS used a single '\r' for a new line.
-----------------------------------------------------------------

+ base64 (July 28, 2020, 8:22 a.m.)

import base64

base64.b64encode(b"Mohsen")
base64.b64encode(bytearray("Mohsen", 'utf-8'))
base64.b64decode("TW9oc2Vu")
---------------------------------------------------------------------------------
name = 'Mohsen'
name.encode('utf-8')
>> b'Mohsen'
---------------------------------------------------------------------------------
# Python 2:
print base64.b64encode("c\xf7>")
Output: Y/c+

print base64.urlsafe_b64encode("c\xf7>")
Output: Y_c-
---------------------------------------------------------------------------------

+ Access Modifiers / Access Specifiers (July 6, 2020, 2:16 p.m.)

Access modifiers (or access specifiers) are keywords in object-oriented languages that set the accessibility of classes, methods, and other members. They are a specific part of programming language syntax used to facilitate the encapsulation of components. In most object-oriented languages, access modifiers are used to limit access to the variables and functions of a class, and most languages use three types: private, public, and protected.
--------------------------------------------------------------------------
There are 3 types of access modifiers for a class in Python:
- Public: Members declared as public are accessible from outside the class through an object of the class.
- Protected: Members declared as protected are accessible from outside the class, but only in a class derived from it, that is, in a child or subclass.
- Private: These members are only accessible from within the class. No outside access is allowed.
--------------------------------------------------------------------------
Examples:

- Public Access Modifier: By default, all the variables and member functions of a class are public in a Python program.

# defining a class Employee
class Employee:
    # constructor
    def __init__(self, name, sal): = name
        self.sal = sal

All the member variables of the class in the above code are public by default, hence we can access them as follows:

>>> emp = Employee("Ironman", 999000)
>>> emp.sal
999000

- Protected Access Modifier: According to Python convention, adding a prefix _ (single underscore) to a variable name makes it protected. No additional keyword is required.

# defining a class Employee
class Employee:
    # constructor
    def __init__(self, name, sal):
        self._name = name  # protected attribute
        self._sal = sal  # protected attribute

In the code above we have made the class variables name and sal protected by adding an _ (underscore) as a prefix, so now we can access them as follows:

>>> emp = Employee("Captain", 10000)
>>> emp._sal
10000

Similarly, if there is a child class extending the class Employee, it can also access the protected member variables of the class Employee. Let's have an example:

# defining a child class
class HR(Employee):
    # member function task
    def task(self):
        print("We manage Employees")

Now let's try to access a protected member variable of class Employee from the class HR:

>>> hrEmp = HR("Captain", 10000)
>>> hrEmp._sal
10000
>>> hrEmp.task()
We manage Employees

- Private Access Modifier: Adding the prefix __ (double underscore) makes a member variable or function private.

# defining a class Employee
class Employee:
    def __init__(self, name, sal):
        self.__name = name  # private attribute
        self.__sal = sal  # private attribute

If we try to access a private member variable, we will get an error:

>>> emp = Employee("Bill", 10000)
>>> emp.__sal
AttributeError: 'Employee' object has no attribute '__sal'
--------------------------------------------------------------------------

+ Name mangling (July 5, 2020, 9:16 a.m.)

In Python, there are no explicit access modifiers, so you can't mark a class member as public/private. The question, then, is how to restrict access to a variable or method outside the class if required. A class member can be made private (close to private, actually) using a process called name mangling. In name mangling, any identifier with at least two leading underscores and at most one trailing underscore is textually replaced with _classname__identifier, where classname is the current class name. For example, a variable __var is rewritten by the Python interpreter in the form _classname__var.
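The rewritten name can be observed directly (the `Demo` class here is made up for the demo):

```python
class Demo:
    def __init__(self):
        self.__secret = 42  # mangled to _Demo__secret at compile time

d = Demo()
print('_Demo__secret' in vars(d))  # True
print(d._Demo__secret)             # 42: mangling renames, it doesn't hide
```

Note that mangling only happens to identifiers inside a class body, so `d.__secret` at module level looks up the literal name `__secret` and fails.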

+ Wildcard imports should be avoided! (July 5, 2020, 8:43 a.m.)

Wildcard imports (from <module> import *) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools. There is one defensible use case for a wildcard import, which is to republish an internal interface as part of a public API (for example, overwriting a pure Python implementation of an interface with the definitions from an optional accelerator module and exactly which definitions will be overwritten isn’t known in advance).

+ Leading and Trailing Underscore (July 5, 2020, 8:40 a.m.)

Single Leading Underscore: _var The underscore prefix is meant as a hint to another programmer that a variable or method starting with a single underscore is intended for internal use. This isn’t enforced by Python. Python does not have strong distinctions between “private” and “public” variables like Java does. It’s like someone put up a tiny underscore warning sign that says: “Hey, this isn’t really meant to be a part of the public interface of this class. Best to leave it alone.” Take a look at the following example: class Test: def __init__(self): = 11 self._bar = 23 What’s going to happen if you instantiate this class and try to access the foo and _bar attributes defined in its __init__ constructor? Let’s find out: >>> t = Test() >>> 11 >>> t._bar 23 You just saw that the leading single underscore in _bar did not prevent us from “reaching into” the class and accessing the value of that variable. That’s because the single underscore prefix in Python is merely an agreed-upon convention, at least when it comes to variable and method names. However, leading underscores do impact how names get imported from modules. Imagine you had the following code in a module called my_module: # This is def external_func(): return 23 def _internal_func(): return 42 Now if you use a wildcard import to import all names from the module, Python will not import names with a leading underscore (unless the module defines an __all__ list that overrides this behavior): >>> from my_module import * >>> external_func() 23 >>> _internal_func() NameError: "name '_internal_func' is not defined" Unlike wildcard imports, regular imports are not affected by the leading single underscore naming convention: >>> import my_module >>> my_module.external_func() 23 >>> my_module._internal_func() 42 ------------------------------------------------------------------ Single Trailing Underscore: var_ Sometimes the most fitting name for a variable is already taken by a keyword. 
Therefore names like class or def cannot be used as variable names in Python. In this case, you can append a single underscore to break the naming conflict:

>>> def make_object(name, class):
SyntaxError: invalid syntax
>>> def make_object(name, class_):
...     pass

In summary, a single trailing underscore (postfix) is used by convention to avoid naming conflicts with Python keywords.
------------------------------------------------------------------
Double Leading Underscore: __var

A double underscore prefix causes the Python interpreter to rewrite the attribute name in order to avoid naming conflicts in subclasses. This is also called name mangling: the interpreter changes the name of the variable in a way that makes it harder to create collisions when the class is extended later.

class Test:
    def __init__(self): = 11
        self._bar = 23
        self.__baz = 23

>>> t = Test()
>>> dir(t)
['_Test__baz', '__class__', '__delattr__', '__dict__', ..., '__weakref__', '_bar', 'foo']

Notice that __baz shows up as _Test__baz in the attribute list. Does name mangling also apply to method names? It sure does: name mangling affects all names that start with two underscore characters (“dunders”) in a class context.
------------------------------------------------------------------
Double Leading and Trailing Underscore: __var__

Perhaps surprisingly, name mangling is not applied if a name starts and ends with double underscores. Variables surrounded by a double underscore prefix and postfix are left unscathed by the Python interpreter:

class PrefixPostfixTest:
    def __init__(self):
        self.__bam__ = 42

>>> PrefixPostfixTest().__bam__
42

However, names that have both leading and trailing double underscores are reserved for special use in the language. This rule covers things like __init__ for object constructors, or __call__ to make an object callable. These dunder methods are often referred to as magic methods, but many people in the Python community, including myself, don’t like that term.
It’s best to stay away from using names that start and end with double underscores (“dunders”) in your own programs to avoid collisions with future changes to the Python language.
------------------------------------------------------------------
Single Underscore: _

In the interactive interpreter, the single underscore (_) is bound to the last expression evaluated. Per convention, a single standalone underscore is also sometimes used as a name to indicate that a variable is temporary or insignificant. For example, in the following loop, we don’t need access to the running index and we can use “_” to indicate that it is just a temporary value:

>>> for _ in range(32):
...     print('Hello, World.')

You can also use single underscores in unpacking expressions as a “don’t care” variable to ignore particular values. Again, this meaning is “per convention” only and there’s no special behavior triggered in the Python interpreter. The single underscore is simply a valid variable name that’s sometimes used for this purpose. In the following code example, I’m unpacking a car tuple into separate variables but I’m only interested in the values for color and mileage. However, in order for the unpacking expression to succeed I need to assign all values contained in the tuple to variables. That’s where “_” is useful as a placeholder variable:

>>> car = ('red', 'auto', 12, 3812.4)
>>> color, _, _, mileage = car
>>> color
'red'
>>> mileage
3812.4
>>> _
12

Besides its use as a temporary variable, “_” is a special variable in most Python REPLs that represents the result of the last expression evaluated by the interpreter. This is handy if you’re working in an interpreter session and you’d like to access the result of a previous calculation.
Or if you’re constructing objects on the fly and want to interact with them without assigning them a name first:

>>> 20 + 3
23
>>> _
23
>>> print(_)
23
>>> list()
[]
>>> _.append(1)
>>> _.append(2)
>>> _.append(3)
>>> _
[1, 2, 3]
------------------------------------------------------------------
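Tying the points above together, name mangling applies to method names as well, not just attributes. A minimal sketch (the class and method names here are illustrative, not from the text above):

```python
class ManglingTest:
    def __method(self):              # stored as _ManglingTest__method
        return 42

    def call_it(self):
        return self.__method()       # inside the class, the short name works

t = ManglingTest()
print(t.call_it())                   # 42
print(t._ManglingTest__method())     # 42: the mangled name is still reachable
```

As with attributes, mangling is a collision-avoidance mechanism, not access control: the mangled name remains accessible from outside the class.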

+ Dunder or magic methods (July 5, 2020, 8:40 a.m.)

Dunder or magic methods in Python are methods whose names begin and end with double underscores. “Dunder” is short for “Double Underscore”. They are commonly used for operator overloading. A few examples of magic methods: __init__, __add__, __len__, __repr__, etc.
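A small sketch of the dunder methods named above, using a hypothetical Vector class (not from the note itself):

```python
class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):        # invoked by the + operator
        return Vector(self.x + other.x, self.y + other.y)

    def __len__(self):               # invoked by len()
        return 2

    def __repr__(self):              # invoked by repr() and the REPL
        return f"Vector({self.x}, {self.y})"

v = Vector(1, 2) + Vector(3, 4)
print(repr(v), len(v))               # Vector(4, 6) 2
```

Defining __add__ is what "operator overloading" means in practice: the + operator is translated by Python into a call to the left operand's __add__ method.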

+ Monkey Patching (July 4, 2020, 4:02 p.m.)

Monkey patching refers to modifications made to a class or module at runtime. This is possible because Python supports changing the behavior of a program while it is being executed. The following example demonstrates monkey patching in Python:

# monkeyy.py
class X:
    def func(self):
        print("func() is being called")

The above module (monkeyy) is used to change the behavior of a function at runtime as shown below:

import monkeyy

def monkey_f(self):
    print("monkey_f() is being called")

# replacing "func" with "monkey_f"
monkeyy.X.func = monkey_f
obj = monkeyy.X()

# calling "func", whose implementation was replaced by "monkey_f"
obj.func()

+ self-keyword (July 4, 2020, 4 p.m.)

self is used as the first parameter of a method inside a class and represents the instance of the class. The object (instance) of the class is automatically passed to the method it belongs to and is received in the self parameter. You can use another name for this first parameter, but self is strongly recommended, as it is the Python convention.
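A minimal sketch of what "the instance is passed automatically" means (the Counter class is illustrative):

```python
class Counter:
    def __init__(self):
        self.count = 0           # 'self' is the instance being initialized

    def increment(self):
        self.count += 1          # the instance arrives automatically as 'self'

c = Counter()
c.increment()                    # instance call: Python passes c as 'self'
Counter.increment(c)             # the equivalent explicit call via the class
print(c.count)                   # 2
```

The two calls are equivalent: `c.increment()` is essentially sugar for `Counter.increment(c)`.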

+ lambda Function (July 4, 2020, 2:39 p.m.)

A lambda function is an anonymous function (a function that does not have a name) in Python. To define anonymous functions, we use the "lambda" keyword instead of the "def" keyword, hence the name "lambda function". Lambda functions can have any number of arguments but only a single expression (not multiple statements).

x = lambda a: a + 10
print(x(5))
------------------------------------------------------------------
x = lambda a, b: a * b
------------------------------------------------------------------
(lambda x: x + 1)(2)
(lambda x, y: x + y)(2, 3)
------------------------------------------------------------------
Because a lambda function is an expression, it can be named. Therefore you could write the previous code as follows:

>>> add_one = lambda x: x + 1
>>> add_one(2)
3
------------------------------------------------------------------
>>> full_name = lambda first, last: f'Full name: {first.title()} {last.title()}'
>>> full_name('guido', 'van rossum')
'Full name: Guido Van Rossum'
------------------------------------------------------------------

+ Is Python fully object oriented? (July 4, 2020, 2:39 p.m.)

Python follows an object-oriented programming paradigm and has all the basic OOP concepts such as inheritance, polymorphism, and more, with the exception of access specifiers. Python doesn’t support strong encapsulation (there is no "private" keyword for data members). It does, however, have a convention that can be used for data hiding, i.e., prefixing a data member with two underscores.

+ Tkinter (July 4, 2020, 2:34 p.m.)

Tkinter is an in-built Python module that is used to create GUI applications. It is Python’s standard toolkit for GUI development. Tkinter comes with Python, so there is no installation needed. We can start using it by importing it in our script.

+ __init__ method (July 4, 2020, 2:27 p.m.)

Equivalent to constructors in OOP terminology, __init__ is a reserved method in Python classes. The __init__ method is called automatically whenever a new object is created (after __new__ has allocated and returned the instance). Despite the "constructor" label, __init__ does not itself allocate memory; it initializes the attributes of the already-created instance.
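A minimal sketch of __init__ initializing an instance (the Book class and its values are illustrative):

```python
class Book:
    def __init__(self, title, pages):
        # __init__ receives the already-created instance as 'self'
        # and initializes its attributes
        self.title = title
        self.pages = pages

b = Book("Example Book", 123)    # __init__ runs automatically here
print(b.title, b.pages)          # Example Book 123
```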

+ NumPy and SciPy (July 4, 2020, 2:22 p.m.)

- NumPy stands for Numerical Python.
- SciPy stands for Scientific Python.
- NumPy is used for efficient and general numeric computations on numerical data saved in arrays, e.g., sorting, indexing, reshaping, and more.
- SciPy is a collection of tools in Python used to perform operations such as integration, differentiation, and more.
- NumPy has some linear algebra functions, but they are not full-fledged.
- Full-fledged linear algebra functions are available in SciPy for algebraic computations.

+ Arrays and Lists (July 4, 2020, 2:17 p.m.)

In Python, when we say "arrays", we are usually referring to "lists". That is because lists are fundamental to Python just as arrays are fundamental to most low-level languages. However, there is indeed a module named "array" in Python, which is used or mentioned rather rarely. Following are some of the differences between Python arrays and Python lists.
- Arrays can only store homogeneous data (data of the same type); lists can store heterogeneous and arbitrary data.
- Since only one type of data can be stored, an array holds its elements compactly, so arrays mostly use less memory than lists. Lists can store data of multiple data types and thus require more memory.
- In most low-level languages (and in NumPy), the length of an array is fixed when it is created, so more elements cannot be added. Python lists are not fixed in length, so appending items is possible. Note that arrays from Python's own array module are also resizable and support append(), unlike C-style arrays.
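A short sketch contrasting the standard-library array module with a list, showing the homogeneity constraint:

```python
from array import array

arr = array('i', [1, 2, 3])      # typecode 'i': signed integers only
arr.append(4)                    # appending a matching type is fine
try:
    arr.append('x')              # a non-integer is rejected at insertion time
except TypeError as e:
    print("array rejected it:", e)

lst = [1, 'x', 3.0]              # a list happily mixes types
print(arr.tolist(), lst)
```

The typecode fixes the element type, which is what lets the array pack its elements compactly in memory.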

+ File Processing Modes (July 4, 2020, 2:14 p.m.)

For opening files, the basic modes are:
- read-only mode (r)
- write-only mode (w)
- read–write mode (r+)
Text mode is the default; it can be made explicit by appending 't' (rt, wt). Similarly, a binary file is opened by appending 'b' (rb, wb). To append content to a file, we use the append mode (a): for text files the mode would be 'at', for binary files 'ab'.
-----------------------------------------------------------------------------
'r' - reading mode. The default. When using this mode the file must exist.
'w' - writing mode. It will create a new file if it does not exist, otherwise it will erase the file and allow you to write to it.
'a' - append mode. It will write data to the end of the file. It does not erase the file, and the file is created if it does not exist.
'rb' - reading mode in binary. Similar to r except that reading is forced in binary mode.
'r+' - reading plus writing mode. This allows you to read from and write to a file without having to use r and w separately. The file must exist.
'rb+' - reading and writing mode in binary. The same as r+ except the data is in binary.
'wb' - writing mode in binary. The same as w except the data is in binary.
'w+' - writing and reading mode. Similar to r+, but if the file does not exist, a new one is made; otherwise, the file is truncated.
'wb+' - writing and reading mode in binary mode. The same as w+ but the data is in binary.
'ab' - appending in binary mode. Similar to a except that the data is in binary.
'a+' - appending and reading mode. Similar to w+ in that it will create a new file if the file does not exist; otherwise, the file pointer is at the end of the existing file.
'ab+' - appending and reading mode in binary.
The same as a+ except that the data is in binary.
'x' - open for exclusive creation; raises FileExistsError if the file already exists. Python 3 added this 'x' mode so that you will not accidentally truncate or overwrite an existing file.
'xb' - exclusive creation in binary writing mode. The same as x except the data is in binary.
'x+' - exclusive creation with reading and writing. Similar to w+ in that it creates a new file, but raises FileExistsError if the file already exists.
'xb+' - the same as x+ but the data is in binary.
-----------------------------------------------------------------------------
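A sketch exercising several of the modes listed above on a scratch file (the file name is hypothetical; tempfile keeps it self-contained):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:            # 'w' creates or truncates the file
    f.write("first line\n")

with open(path, "a") as f:            # 'a' appends at the end
    f.write("second line\n")

with open(path, "r") as f:            # 'r' requires the file to exist
    content = f.read()
print(content)

with open(path, "rb") as f:           # 'rb' yields bytes, not str
    print(type(f.read()).__name__)    # bytes

try:
    open(path, "x")                   # 'x' refuses to touch an existing file
except FileExistsError:
    print("already exists")
```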

+ "with" statement (July 4, 2020, 2:12 p.m.)

Using the ‘with’ statement we can open a file and have it closed automatically as soon as the block of code where "with" is used exits. This way, we can opt out of calling the close() method explicitly.

with open("filename", "mode") as file_var:
    pass
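A small sketch demonstrating the automatic close (tempfile is used here only to make the example self-contained):

```python
import tempfile

with tempfile.NamedTemporaryFile(mode="w+", delete=False) as f:
    f.write("hello")
    was_open = not f.closed       # inside the block the file is open

# leaving the block closed the file automatically,
# even though close() was never called explicitly
print(was_open, f.closed)         # True True
```

The same pattern works for any context manager (locks, database connections, etc.), not just files.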

+ REPL (July 4, 2020, 2 p.m.)

A REPL (say it, “REP-UL”) is an interactive way to talk to your computer in Python. To make this work, the computer does four things:
- Read the user input (your Python commands).
- Evaluate your code (to work out what you mean).
- Print any results (so you can see the computer’s response).
- Loop back to step 1 (to continue the conversation).

+ map function (July 4, 2020, 1:52 p.m.)

The map() function in Python has two parameters, "function" and "iterable". map() takes a function as an argument and applies that function to all the elements of an iterable, passed to it as another argument. It returns a map object (a lazy iterator), which can be converted to a list to see the results. For example:

def calculate_sq(n):
    return n * n

numbers = (2, 3, 4, 5)
result = map(calculate_sq, numbers)
print(list(result))   # [4, 9, 16, 25]

+ Pickling and Unpickling (July 4, 2020, 1:51 p.m.)

Pickling is the process of converting Python objects, such as lists, dicts, etc., into a byte stream. This is done using a module named ‘pickle’, hence the name pickling. The process of retrieving the original Python objects from the stored byte-stream representation, which is the reverse of the pickling process, is called unpickling.
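A round-trip sketch with the pickle module (the sample data is made up for illustration):

```python
import pickle

data = {"name": "falcon", "speeds": [120, 240, 390]}

blob = pickle.dumps(data)         # pickling: object -> byte stream
print(type(blob).__name__)        # bytes

restored = pickle.loads(blob)     # unpickling: byte stream -> object
print(restored == data)           # True
```

pickle.dump()/pickle.load() do the same thing against a file object opened in binary mode instead of an in-memory bytes object.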

+ split(), sub(), and subn() (July 4, 2020, 11:53 a.m.)

These methods belong to Python's RegEx ‘re’ module and are used to modify strings.

- split(): This method is used to split a given string into a list.

from re import split
print(split(r'\W+', 'Words, words , Words'))
print(split(r'\W+', "Word's words Words"))
print(split(r'\W+', 'On 12th Jan 2016, at 11:02 AM'))
print(split(r'\d+', 'On 12th Jan 2016, at 11:02 AM'))

Output:
['Words', 'words', 'Words']
['Word', 's', 'words', 'Words']
['On', '12th', 'Jan', '2016', 'at', '11', '02', 'AM']
['On ', 'th Jan ', ', at ', ':', ' AM']
----------------------------------------------------------------------
- sub(): This method is used to find a substring where a regex pattern matches, and then it replaces the matched substring with a different string.

import re
print(re.sub('ub', '~*', 'Subject has Uber booked already', flags=re.IGNORECASE))
print(re.sub('ub', '~*', 'Subject has Uber booked already'))
print(re.sub('ub', '~*', 'Subject has Uber booked already', count=1, flags=re.IGNORECASE))
print(re.sub(r'\sAND\s', ' & ', 'Baked Beans And Spam', flags=re.IGNORECASE))

Output:
S~*ject has ~*er booked already
S~*ject has Uber booked already
S~*ject has Uber booked already
Baked Beans & Spam
----------------------------------------------------------------------
- subn(): This method is similar to the sub() method, but it returns a tuple of the new string and the number of replacements made.

import re
print(re.subn('ub', '~*', 'Subject has Uber booked already'))
t = re.subn('ub', '~*', 'Subject has Uber booked already', flags=re.IGNORECASE)
print(t)
print(len(t))
print(t[0])

Output:
('S~*ject has Uber booked already', 1)
('S~*ject has ~*er booked already', 2)
2
S~*ject has ~*er booked already
----------------------------------------------------------------------

+ How is memory managed in Python? (July 4, 2020, 11:47 a.m.)

- Memory in Python is managed by the Python private heap space. All Python objects and data structures are located in a private heap. This private heap is taken care of by the Python interpreter itself; a programmer does not have direct access to it.
- The Python memory manager takes care of the allocation of Python private heap space.
- Unused memory in the private heap is reclaimed by Python's built-in garbage collector, which recycles and frees up objects that are no longer referenced.
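A small sketch of the observable side of this machinery, assuming CPython (reference counting plus a cyclic garbage collector are CPython implementation details, not language guarantees):

```python
import sys, gc

x = []          # a new list object, referenced by the name x
y = x           # a second reference to the same object

# getrefcount() reports one extra reference for its own argument
print(sys.getrefcount(x) >= 3)   # True

del y                            # drop a reference; the object survives via x
unreachable = gc.collect()       # run the cyclic garbage collector manually
print(unreachable >= 0)          # True: number of unreachable objects found
```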

+ Difference between Lists and Tuples (July 4, 2020, 11:45 a.m.)

Lists are mutable, i.e., they can be edited. Tuples are immutable (they are like lists that cannot be edited). Tuples are generally faster than lists and use slightly less memory.
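A quick sketch of the mutability difference:

```python
coords_list = [1, 2, 3]
coords_tuple = (1, 2, 3)

coords_list[0] = 99              # lists are mutable: edited in place
try:
    coords_tuple[0] = 99         # tuples are immutable
except TypeError as e:
    print("tuple:", e)

print(coords_list)               # [99, 2, 3]
```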

+ PYTHONSTARTUP, PYTHONCASEOK, and PYTHONHOME (July 4, 2020, 11:39 a.m.)

- PYTHONSTARTUP: It contains the path of an initialization file containing Python source code, which is executed every time we start the interactive interpreter. It typically contains commands that load utilities or modify PYTHONPATH.
- PYTHONCASEOK: It is used on Windows to instruct Python to find the first case-insensitive match in an import statement. We can set this variable to any value to activate it.
- PYTHONHOME: It sets an alternative location for the standard Python libraries (an alternative module search path), which makes switching between module libraries easy.

+ Static vs. dynamic typing (July 4, 2020, 11:04 a.m.)

Statically Typed Language:
In a statically typed language, every variable name is bound both:
- to a type (at compile time, by means of a data declaration)
- to an object.
The binding to an object is optional: if a name is not bound to an object, the name is said to be null. Once a variable name has been bound to a type (that is, declared), it can be bound (via an assignment statement) only to objects of that type; it cannot ever be bound to an object of a different type. An attempt to bind the name to an object of the wrong type will raise a type exception.
------------------------------------------------------------
Dynamically Typed Language:
In a dynamically typed language, every variable name is (unless it is null) bound only to an object. Names are bound to objects at execution time by means of assignment statements, and it is possible to bind a name to objects of different types during the execution of the program.
------------------------------------------------------------
Python doesn’t know the type of a variable until the code is run, so a type declaration would be of no use. What Python does is store the value at some memory location, bind the variable name to that memory container, and make the contents of the container accessible through that name. The data type attaches to the value, not the name, and Python gets to know the type of the value at run time.
------------------------------------------------------------
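A tiny sketch of dynamic binding in action:

```python
# The same name can be rebound to objects of different types at run time;
# in a statically typed language each rebinding below would be a type error.
value = 42
print(type(value).__name__)      # int

value = "forty-two"
print(type(value).__name__)      # str

value = [4, 2]
print(type(value).__name__)      # list
```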

+ Coroutines (July 1, 2020, 10:53 a.m.)

A coroutine (short for cooperative subroutine) describes code that actively facilitates the needs of other parts of a system.

We are all familiar with a "function", also known as a "subroutine", "procedure", "subprocess", etc. A function is a sequence of instructions packed as a unit to perform a certain task. When the logic of a complex function is divided into several self-contained steps that are themselves functions, these functions are called helper functions or subroutines.

Coroutines are a generalization of subroutines. They are used for cooperative multitasking, where a process voluntarily yields (gives away) control periodically, or when idle, in order to enable multiple applications to run simultaneously.

When a program calls a function, its current execution context is saved before passing control over to the function and resuming execution. The function then creates a new context; from there on out, newly created data exists exclusively during the function's runtime. As soon as the task is complete, control is transferred back to the caller: the new context is effectively deleted and replaced by the previous one.

Coroutines are a special type of function that deliberately yields control over to the caller, but does not end its context in the process, instead maintaining it in an idle state. Coroutines benefit from the ability to keep their data throughout their lifetime and, unlike functions, can have several entry points for suspending and resuming execution.

Coroutines in Python work in a very similar way to generators. Both operate over data, so let's keep the main differences simple:
- Generators produce data
- Coroutines consume data
The distinct handling of the keyword "yield" determines whether we are manipulating one or the other.
------------------------------------------------------------------------------
Defining a Coroutine

def bare_bones():
    while True:
        value = (yield)

It's clear to see the resemblance to a regular Python function.
The "while True:" block guarantees the continuous execution of the coroutine for as long as it receives values. The value is collected through the "yield" statement. This code is practically useless on its own, so we'll round it off with a few print statements:

def bare_bones():
    print("My first Coroutine!")
    while True:
        value = (yield)
        print(value)

Now, what happens when we try to call it like so:

coroutine = bare_bones()

If this were a normal Python function, one would expect it to produce some sort of output by this point. But if you run the code in its current state, you will notice that not a single print() gets called. That is because coroutines require next() to be called first:

def bare_bones():
    print("My first Coroutine!")
    while True:
        value = (yield)
        print(value)

coroutine = bare_bones()
next(coroutine)

This starts the execution of the coroutine until it reaches its first breakpoint, value = (yield). Then it stops, returning execution over to the main program, and idles while awaiting new input:

My first Coroutine!

New input can be sent with send():

coroutine.send("First Value")

Our variable value will then receive the string First Value, print it, and a new iteration of the while True: loop forces the coroutine to once again wait for new values to be delivered. You can do this as many times as you like. Finally, once you are done with the coroutine and no longer wish to make use of it, you can free its resources by calling close(). This raises a GeneratorExit exception inside the coroutine, which needs to be dealt with:

def bare_bones():
    print("My first Coroutine!")
    try:
        while True:
            value = (yield)
            print(value)
    except GeneratorExit:
        print("Exiting coroutine...")

coroutine = bare_bones()
next(coroutine)
coroutine.send("First Value")
coroutine.send("Second Value")
coroutine.close()

Output:
My first Coroutine!
First Value
Second Value
Exiting coroutine...
------------------------------------------------------------------------------
The differences between a coroutine and a subroutine are:
- Unlike subroutines, coroutines have many entry points for suspending and resuming execution. A coroutine can suspend its execution and transfer control to another coroutine, and can resume execution from the point it left off.
- Unlike subroutines, there is no main function to call coroutines in a particular order and coordinate the results. Coroutines are cooperative, which means they link together to form a pipeline. One coroutine may consume input data and send it to another, which processes it. Finally, there may be a coroutine to display the result.
------------------------------------------------------------------------------
Coroutine vs. Thread:
You might be wondering how a coroutine is different from a thread; both seem to do the same job. In the case of threads, it's the operating system (or runtime environment) that switches between threads according to the scheduler. In the case of coroutines, it's the programmer and the programming language that decide when to switch. Coroutines multitask cooperatively, by suspending and resuming at points set by the programmer.
------------------------------------------------------------------------------
Python Coroutines
In Python, coroutines are similar to generators, but with a few extra methods and slight changes in how we use the "yield" statement. Generators produce data for iteration, while coroutines can also consume data. In Python 2.5, a slight modification to the yield statement was introduced: yield can now also be used as an expression, for example on the right side of an assignment:

line = (yield)

Whatever value we send to the coroutine is captured and returned by the (yield) expression. A value can be sent to the coroutine with the send() method. For example, consider this coroutine which prints out names having the prefix “Dear” in them.
We will send names to the coroutine using the "send()" method.

# Python 3 program demonstrating coroutine execution
def print_name(prefix):
    print("Searching prefix: {}".format(prefix))
    while True:
        name = (yield)
        if prefix in name:
            print(name)

# calling the coroutine; nothing happens yet
corou = print_name("Dear")

# this starts execution of the coroutine, prints the first line
# ("Searching prefix...") and advances execution to the first yield expression
next(corou)

# sending inputs
corou.send("David")
corou.send("Dear David")

Output:
Searching prefix: Dear
Dear David

Execution of a coroutine is similar to that of a generator. When we call the coroutine, nothing happens; it runs only in response to next() and send() calls.
------------------------------------------------------------------------------
Closing a Coroutine
A coroutine might run indefinitely; to close it, the "close()" method is used. When a coroutine is closed, a GeneratorExit exception is raised inside it, which can be caught in the usual way. After closing a coroutine, if we try to send values to it, a StopIteration exception is raised.
------------------------------------------------------------------------------
Chaining coroutines to create a pipeline
Coroutines can be used to set up pipes. We can chain coroutines together and push data through the pipe using the send() method. A pipe needs:
- An initial source (producer), which drives the whole pipeline. The producer is usually not a coroutine; it's just a simple method.
- A sink, which is the endpoint of the pipe. A sink might collect all the data and display it.
------------------------------------------------------------------------------
Passing Arguments
Much like functions, coroutines are also capable of receiving arguments:

def filter_line(num):
    while True:
        line = (yield)
        if num in line:
            print(line)

cor = filter_line("33")
next(cor)
cor.send("Jessica, age:24")
cor.send("Marco, age:33")
cor.send("Filipe, age:55")

Output:
Marco, age:33
------------------------------------------------------------------------------
Applying Several Breakpoints
Multiple yield statements can be sequenced together in the same individual coroutine:

def joint_print():
    while True:
        part_1 = (yield)
        part_2 = (yield)
        print("{} {}".format(part_1, part_2))

cor = joint_print()
next(cor)
cor.send("So Far")
cor.send("So Good")

Output:
So Far So Good
------------------------------------------------------------------------------
Coroutines with Decorators
This is all well and good! But when working on larger projects, initiating every single coroutine manually can be a huge drag. Worry not, it's just a matter of exploiting the power of decorators so we no longer need to call next() ourselves:

def coroutine(func):
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)
        return cr
    return start

@coroutine
def bare_bones():
    while True:
        value = (yield)
        print(value)

cor = bare_bones()
cor.send("Using a decorator!")

Running this piece of code will yield:
Using a decorator!
------------------------------------------------------------------------------
Building Pipelines
A pipeline is a sequence of processing elements organized so that the output of each element is the input of the next. Data gets pushed through the pipe until it is eventually consumed. Every pipeline requires at least one source and one sink.
The remaining stages of the pipe can perform several different operations, from filtering to modifying, routing, and reducing data. Coroutines are natural candidates for performing these operations: they can pass data between one another with send() calls and can also serve as the end-point consumer. Let's look at the following example (using the coroutine decorator defined above):

def producer(cor):
    n = 1
    while n < 100:
        cor.send(n)
        n = n * 2

@coroutine
def my_filter(num, cor):
    while True:
        n = (yield)
        if n < num:
            cor.send(n)

@coroutine
def printer():
    while True:
        n = (yield)
        print(n)

prnt = printer()
filt = my_filter(50, prnt)
producer(filt)

Output:
1
2
4
8
16
32

So, what we have here is the producer() acting as the source, creating some values that are then filtered before being printed by the sink, in this case the printer() coroutine. my_filter(50, prnt) acts as the single intermediary step in the pipeline and receives its target coroutine as an argument.

This chaining perfectly illustrates the strength of coroutines: they are scalable for bigger projects (all that is required is to add more stages to the pipeline) and easily maintainable (changes to one stage don't force an entire rewrite of the source code).
------------------------------------------------------------------------------
Caution when Using Coroutines

The send() Method is Not Thread-Safe

import threading
from time import sleep

def print_number(cor):
    while True:
        cor.send(1)

def coroutine():
    i = 1
    while True:
        num = (yield)
        print(i)
        sleep(3)
        i += num

cor = coroutine()
next(cor)

t = threading.Thread(target=print_number, args=(cor,))
t.start()

while True:
    cor.send(5)

Because send() is not synchronized, nor does it have inherent protection against thread-related miscalls, the following error is raised: ValueError: generator already executing. Mixing coroutines with concurrency should be done with extreme caution.
------------------------------------------------------------------------------
It's not Possible to Loop Coroutines

def coroutine_1(value):
    while True:
        next_cor = (yield)
        print(value)
        value = value - 1
        if next_cor is not None:
            next_cor.send(value)

def coroutine_2(next_cor):
    while True:
        value = (yield)
        print(value)
        value = value - 2
        if next_cor is not None:
            next_cor.send(value)

cor1 = coroutine_1(20)
next(cor1)
cor2 = coroutine_2(cor1)
next(cor2)
cor1.send(cor2)

The same ValueError shows its face. From these simple examples we can infer that the send() method builds a sort of call stack that doesn't return until the target reaches its yield statement. So, using coroutines is not all sunshine and rainbows; careful thought must be given before applying them.
------------------------------------------------------------------------------

+ Multiprocessing, Threading, and Asynchrony (July 1, 2020, 9:37 a.m.)

Strategies for minimizing the delays of blocking I/O fall into three major categories: multiprocessing, threading, and asynchrony.
-------------------------------------------------------------------------------
Multiprocessing
Multiprocessing is a form of parallel computing: instructions are executed in an overlapping time frame on multiple physical processors or cores. Each process spawned by the kernel incurs an overhead cost, including an independently-allocated chunk of memory (heap). Python implements parallelism with the "multiprocessing" module. The following is an example of a Python 3 program that spawns four child processes, each of which exhibits a random, independent delay. The output shows the process ID of each child, the system time before and after each delay, and the current and peak memory allocation at each step.

from multiprocessing import Process
import os, time, datetime, random, tracemalloc

tracemalloc.start()

children = 4    # number of child processes to spawn
maxdelay = 6    # maximum delay in seconds

def status():
    return ('Time: ' + str( +
            '\t Malloc, Peak: ' + str(tracemalloc.get_traced_memory()))

def child(num):
    delay = random.randrange(maxdelay)
    print(f"{status()}\t\tProcess {num}, PID: {os.getpid()}, Delay: {delay} seconds...")
    time.sleep(delay)
    print(f"{status()}\t\tProcess {num}: Done.")

if __name__ == '__main__':
    print(f"Parent PID: {os.getpid()}")
    for i in range(children):
        proc = Process(target=child, args=(i,))
        proc.start()

Output:
Parent PID: 16048
Time: 09:52:47.014906  Malloc, Peak: (228400, 240036)  Process 0, PID: 16051, Delay: 1 seconds...
Time: 09:52:47.016517  Malloc, Peak: (231240, 240036)  Process 1, PID: 16052, Delay: 4 seconds...
Time: 09:52:47.018786  Malloc, Peak: (231616, 240036)  Process 2, PID: 16053, Delay: 3 seconds...
Time: 09:52:47.019398  Malloc, Peak: (232264, 240036)  Process 3, PID: 16054, Delay: 2 seconds...
Time: 09:52:48.017104  Malloc, Peak: (228434, 240036)  Process 0: Done.
Time: 09:52:49.021636 Malloc, Peak: (232298, 240036) Process 3: Done. Time: 09:52:50.022087 Malloc, Peak: (231650, 240036) Process 2: Done. Time: 09:52:51.020856 Malloc, Peak: (231274, 240036) Process 1: Done. ------------------------------------------------------------------------------- Threading Threading is an alternative to multiprocessing, with benefits and downsides. Threads are independently scheduled, and their execution may occur within an overlapping time period. Unlike multiprocessing, however, threads exist entirely in a single kernel process and share a single allocated heap. Python threads are concurrent — multiple sequences of machine code are executed in overlapping time frames. But they are not parallel — execution does not occur simultaneously on multiple physical cores. The primary downsides to Python threading are memory safety and race conditions. All child threads of a parent process operate in the same shared memory space. Without additional protections, one thread may overwrite a shared value in memory without other threads being aware of it. Such data corruption would be disastrous. To enforce thread safety, CPython implementations use a global interpreter lock (GIL). The GIL is a mutex mechanism that prevents multiple threads from executing simultaneously on Python objects. Effectively, this means that only one thread runs at any given time. Here's the threaded version of the multiprocessing example from the previous section. Notice that very little has changed: "multiprocessing.Process" is replaced with "threading.Thread". As indicated in the output, everything happens in a single process, and the memory footprint is significantly smaller. 
from threading import Thread import os, time, datetime, random, tracemalloc tracemalloc.start() children = 4 # number of child threads to spawn maxdelay = 6 # maximum delay in seconds def status(): return ('Time: ' + str( + '\t Malloc, Peak: ' + str(tracemalloc.get_traced_memory())) def child(num): delay = random.randrange(maxdelay) print(f"{status()}\t\tProcess {num}, PID: {os.getpid()}, Delay: {delay} seconds...") time.sleep(delay) print(f"{status()}\t\tProcess {num}: Done.") if __name__ == '__main__': print(f"Parent PID: {os.getpid()}") for i in range(children): thr = Thread(target=child, args=(i,)) thr.start() Output: Parent PID: 19770 Time: 10:44:40.942558 Malloc, Peak: (9150, 9264) Process 0, PID: 19770, Delay: 3 seconds... Time: 10:44:40.942937 Malloc, Peak: (13989, 14103) Process 1, PID: 19770, Delay: 5 seconds... Time: 10:44:40.943298 Malloc, Peak: (18734, 18848) Process 2, PID: 19770, Delay: 3 seconds... Time: 10:44:40.943746 Malloc, Peak: (23959, 24073) Process 3, PID: 19770, Delay: 2 seconds... Time: 10:44:42.945896 Malloc, Peak: (26599, 26713) Process 3: Done. Time: 10:44:43.945739 Malloc, Peak: (26741, 27223) Process 0: Done. Time: 10:44:43.945942 Malloc, Peak: (26851, 27333) Process 2: Done. Time: 10:44:45.948107 Malloc, Peak: (24639, 27475) Process 1: Done. ------------------------------------------------------------------------------- Asynchrony Asynchrony is an alternative to threading for writing concurrent applications. Asynchronous events occur on independent schedules, "out of sync" with one another, entirely within a single thread. Unlike threading, in asynchronous programs the programmer controls when and how voluntary preemption occurs, making it easier to isolate and avoid race conditions. -------------------------------------------------------------------------------

+ Generators (June 29, 2020, 11:50 a.m.)

A generator function is a special kind of function that returns a lazy iterator: an object you can loop over like a list, but which, unlike a list, does not store its contents in memory. Generator functions look and act just like regular functions, but with one defining characteristic: they use the Python yield keyword instead of return. You pay no memory penalty when you use generator expressions.
------------------------------------------------------------------
Generators are very easy to implement, but a bit difficult to understand. Generators are used to create iterators, but with a different approach: they are simple functions that return an iterable set of items, one at a time, in a special way. When an iteration over a set of items starts using the "for" statement, the generator is run. Once the generator's function code reaches a "yield" statement, the generator yields its execution back to the for loop, returning a new value from the set. The generator function can generate as many values (possibly infinite) as it wants, yielding each one in its turn.
------------------------------------------------------------------
import random

def lottery():
    # returns 6 numbers between 1 and 40
    for i in range(6):
        yield random.randint(1, 40)

    # returns a 7th number between 1 and 15
    yield random.randint(1, 15)

for random_number in lottery():
    print("And the next number is... %d!" % (random_number))
------------------------------------------------------------------
Calling a function that uses yield produces a generator object. If the same function used return instead, it would stop after producing only the first value.
------------------------------------------------------------------
Building Generators With Generator Expressions:

nums_squared_lc = [num**2 for num in range(5)]  # List
nums_squared_gc = (num**2 for num in range(5))  # Generator
------------------------------------------------------------------
>>> l = [x for x in 'mohsen']
>>> print(l)
['m', 'o', 'h', 's', 'e', 'n']
>>> g = (x for x in 'mohsen')
>>> print(g)
<generator object <genexpr> at 0x7f2bd01b0650>
>>> next(g)
'm'
>>> next(g)
'o'
>>> next(g)
'h'
>>> next(g)
's'
>>> next(g)
'e'
>>> next(g)
'n'
>>> next(g)
Traceback (most recent call last):
  File "<input>", line 1, in <module>
StopIteration

try:
    print(next(g))
except StopIteration:
    pass
------------------------------------------------------------------
file_name = "techcrunch.csv"
lines = (line for line in open(file_name))
list_line = (s.rstrip().split(",") for s in lines)
------------------------------------------------------------------

+ asyncio / concurrent.futures (June 29, 2020, 8:13 a.m.)

asyncio is designed to improve the performance of network I/O, not CPU-bound operations (for which multiprocessing should be used). So asyncio is not a replacement for all types of asynchronous execution.
---------------------------------------------------------------------------
There are many modules provided by the Python standard library for handling asynchronous, concurrent, and multiprocess code:
_thread
threading
multiprocessing
asyncio
concurrent.futures

One of the issues with writing concurrent code (using either the _thread or threading modules) is that you suffer the cost of "CPU context switching" (as a CPU core can only run one thread at a time), which, although quick, isn't free. Multi-threaded code also has to deal with issues such as "race conditions", "dead/live locks", and "resource starvation" (where some threads are over-utilized and others are under-utilized). Asyncio avoids these issues.
---------------------------------------------------------------------------
asyncio is a library for writing concurrent code using the async/await syntax. The asyncio module provides both high-level and low-level APIs. Library and framework developers are expected to use the low-level APIs, while all other users are encouraged to use the high-level APIs.
---------------------------------------------------------------------------
Asyncio is designed around the concept of "cooperative multitasking", so you have complete control over when a CPU "context switch" occurs (i.e. context switching happens at the application level, not the hardware level). When using threads, the Python scheduler is responsible for this, so your application may context switch at any moment (i.e. it becomes non-deterministic). This means that when using threads you'll also need some form of "lock" mechanism to prevent multiple threads from accessing/mutating shared memory (which would otherwise cause your program to become non-thread-safe).
---------------------------------------------------------------------------
concurrent.futures
The concurrent.futures module provides a high-level abstraction over the "threading" and "multiprocessing" modules. In fact, the "_thread" module is a very low-level API that the "threading" module is itself built on top of.
---------------------------------------------------------------------------
Now, we've already mentioned that asyncio helps us avoid using threads, so why would we want to use "concurrent.futures" if it's just an abstraction on top of threads (and multiprocessing)? Well, because not all libraries/modules/APIs support the asyncio model.
---------------------------------------------------------------------------
There are many ways to achieve asynchronous programming. There's the event-loop approach (which asyncio implements), a "callback" style historically favored by single-threaded languages such as JavaScript, and, more traditionally, a concept known as "green threads".
---------------------------------------------------------------------------
The core element of all asyncio applications is the "event loop". The event loop is what schedules and runs asynchronous tasks.
---------------------------------------------------------------------------
What makes the asyncio event loop so effective is the fact that Python implements it around generators. A generator enables a function to be partially executed, then halt its execution at a specific point, maintaining a stack of objects and exceptions, before resuming again.
---------------------------------------------------------------------------
By default, when your program accesses data from an I/O source, it waits for that operation to complete before continuing to execute the program.
with open('myfile.txt', 'r') as file:
    data =  # until the data is read into memory, the program waits here

print(data)

The program is blocked from continuing its flow of execution while a physical device is accessed and data is transferred. Network operations are another common source of blocking:

# pip install --user requests
import requests

req = requests.get('')
# Blocking occurs here, waiting for completion of an HTTPS request
print(req.text)

In many cases, the delay caused by blocking is negligible. However, blocking I/O scales very poorly. If you need to wait for 10**10 file reads or network transactions, performance will suffer.
---------------------------------------------------------------------------
High-Level vs Low-Level asyncio API
Asyncio components are divided into high-level APIs (for writing programs) and low-level APIs (for writing libraries or frameworks based on asyncio). Every asyncio program can be written using only the high-level APIs. If you're not writing a framework or library, you never need to touch the low-level stuff.
---------------------------------------------------------------------------

+ Code Coverage (June 28, 2020, 1:23 p.m.)

Code coverage is the percentage of code which is covered by automated tests. Code coverage measurement simply determines which statements in a body of code have been executed through a test run, and which statements have not. In general, a code coverage system collects information about the running program and then combines that with source information to generate a report on the test suite's code coverage.
-------------------------------------------------------------------
Code coverage is a software testing metric that helps assess the test performance and quality aspects of any software. It determines the number of lines of code that are successfully validated under a test procedure, which in turn helps analyze how comprehensively the software is verified.
-------------------------------------------------------------------
Code coverage analysis can only be used to validate the test cases that are run against the source code, not to evaluate the software product itself. It neither establishes that the source code is bug-free nor proves that the written code is correct. Then why is it important?

- Easy maintenance of code base -- Writing scalable code is crucial for extending the software program through the introduction of new or modified functionality. However, it is difficult to determine whether the written code is scalable. Code coverage can prove to be a useful metric in that context. The analysis report will help developers ensure code quality is well maintained and new features can be added with little to no effort.

- Exposure of bad code -- Continuous analysis helps developers understand bad, dead, and unused code. As a result, they can improve code-writing practices, which in turn results in better maintainability of the product quality.

- Faster time to market -- With the help of this metric, developers can finish the software development process faster, thereby increasing their productivity and efficiency. As a result, they will be able to deliver more products, allowing companies to launch more software applications on the market in less time. This will undoubtedly lead to increased customer satisfaction and high ROI.
-------------------------------------------------------------------

+ Arrays (June 27, 2020, 3:43 p.m.)

list – Mutable Dynamic Arrays:
Lists are a part of the core Python language. Despite their name, Python's lists are implemented as dynamic arrays behind the scenes. This means lists allow elements to be added or removed, and they will automatically adjust the backing store that holds these elements by allocating or releasing memory.
Python lists can hold arbitrary elements—"everything" is an object in Python, including functions. Therefore you can mix and match different kinds of data types and store them all in a single list.
------------------------------------------------------------------------
tuple – Immutable Containers:
Tuples are a part of the Python core language. Unlike lists, Python's tuple objects are immutable: elements can't be added or removed dynamically—all elements in a tuple must be defined at creation time.
Just like lists, tuples can hold elements of arbitrary data types. Having this flexibility is powerful, but again it also means that data is less tightly packed than it would be in a typed array.
------------------------------------------------------------------------
array.array – Basic Typed Arrays:
Python's array module provides space-efficient storage of basic C-style data types like bytes, 32-bit integers, floating-point numbers, and so on. Arrays created with the array.array class are mutable and behave similarly to lists—except they are "typed arrays" constrained to a single data type.
Because of this constraint, array.array objects with many elements are more space-efficient than lists and tuples. The elements stored in them are tightly packed, and this can be useful if you need to store many elements of the same type.
Also, arrays support many of the same methods as regular lists. For example, to append to an array in Python you can just use the familiar array.append() method. As a result of this similarity between Python lists and array objects, you might be able to use an array as a "drop-in replacement" without requiring major changes to your application.
------------------------------------------------------------------------
str – Immutable Arrays of Unicode Characters:
Python uses str objects to store textual data as immutable sequences of Unicode characters. Practically speaking, that means a str is an immutable array of characters. Oddly enough, it's also a recursive data structure—each character in a string is itself a str object of length 1.
String objects are space-efficient because they're tightly packed and specialize in a single data type. If you're storing Unicode text, you should use them. Because strings are immutable in Python, modifying a string requires creating a modified copy. The closest equivalent to a "mutable string" is storing individual characters inside a list.
------------------------------------------------------------------------
bytes – Immutable Arrays of Single Bytes:
Bytes objects are immutable sequences of single bytes (integers in the range 0 <= x <= 255). Conceptually, they're similar to str objects, and you can also think of them as immutable arrays of bytes.
Like strings, bytes have their own literal syntax for creating objects and they're space-efficient. Bytes objects are immutable, but unlike strings there's a dedicated "mutable byte array" data type called bytearray that they can be unpacked into.
------------------------------------------------------------------------
bytearray – Mutable Arrays of Single Bytes:
The bytearray type is a mutable sequence of integers in the range 0 <= x <= 255. They're closely related to bytes objects, with the main difference being that bytearrays can be modified freely—you can overwrite elements, remove existing elements, or add new ones. The bytearray object will grow and shrink appropriately.
Bytearrays can be converted back into immutable bytes objects, but this incurs copying the stored data in full—an operation taking O(n) time.

>>> arr = bytearray((0, 1, 2, 3))
>>> arr[1]
1

# The bytearray repr:
>>> arr
bytearray(b'\x00\x01\x02\x03')

# Bytearrays are mutable:
>>> arr[1] = 23
>>> arr
bytearray(b'\x00\x17\x02\x03')
>>> arr[1]
23
------------------------------------------------------------------------
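The "typed array" constraint described above can be sketched with array.array (a minimal example; the values are illustrative):

```python
from array import array

arr = array('i', [1, 2, 3])  # 'i' = signed int typecode
arr.append(4)                # familiar list-like API
arr[1] = 23                  # mutable, like a list
print(arr.tolist())

# The single-type constraint is enforced at runtime:
try:
    arr.append('x')
except TypeError as exc:
    print('rejected:', exc)
```

Unlike a list, every element is stored as a tightly packed C int, which is where the space savings come from.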

+ argparse (June 27, 2020, 3:11 p.m.)

This module was released as a replacement for the older getopt and optparse modules, which lacked some important features. argparse is the "recommended command-line parsing module in the Python standard library." It's what you use to get command-line arguments into your program.

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--list', default='all', const='all', nargs='?',
                    choices=['servers', 'storage', 'all'],
                    help='list servers, storage, or both (default: %(default)s)')
----------------------------------------------------------------
import argparse

my_parser = argparse.ArgumentParser()
my_parser.add_argument('--input', action='store', type=int, required=True)
my_parser.add_argument('--id', action='store', type=int)

args = my_parser.parse_args()
print(args.input)
----------------------------------------------------------------

+ Duck Typing (June 27, 2020, 2:03 p.m.)

This term comes from the saying "If it walks like a duck, and it quacks like a duck, then it must be a duck." (There are other variations.) Duck typing is a concept related to dynamic typing, where the type or the class of an object is less important than the methods it defines. When you use duck typing, you do not check types at all. Instead, you check for the presence of a given method or attribute.

For example, you can call len() on any Python object that defines a .__len__() method:

>>> class TheHobbit:
...     def __len__(self):
...         return 95022
...
>>> the_hobbit = TheHobbit()
>>> the_hobbit
<__main__.TheHobbit object at 0x108deeef0>
>>> len(the_hobbit)
95022

>>> my_str = "Hello World"
>>> my_list = [34, 54, 65, 78]
>>> my_dict = {"one": 123, "two": 456, "three": 789}

>>> len(my_str)
11
>>> len(my_list)
4
>>> len(my_dict)
3
>>> len(the_hobbit)
95022

>>> my_int = 7
>>> my_float = 42.3

>>> len(my_int)
Traceback (most recent call last):
  File "<input>", line 1, in <module>
    len(my_int)
TypeError: object of type 'int' has no len()

>>> len(my_float)
Traceback (most recent call last):
  File "<input>", line 1, in <module>
    len(my_float)
TypeError: object of type 'float' has no len()

In order for you to call len(obj), the only real constraint on obj is that it must define a .__len__() method. Otherwise, the object can be of types as different as str, list, dict, or TheHobbit.

+ __mro__ - Method Resolution Order (June 27, 2020, 1:41 p.m.)

Method Resolution Order (MRO) is the order in which Python looks for a method in a hierarchy of classes. It plays an especially vital role in the context of multiple inheritance, where a single method name may be found in multiple superclasses.
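A diamond-inheritance sketch (the class names are invented for illustration) showing the order Python searches:

```python
class A:
    def who(self):
        return 'A'

class B(A):
    def who(self):
        return 'B'

class C(A):
    def who(self):
        return 'C'

class D(B, C):
    pass

# D's MRO is D -> B -> C -> A -> object (C3 linearization):
print([cls.__name__ for cls in D.__mro__])

# B precedes C in the MRO, so D inherits B's implementation:
print(D().who())
```

The same order is what super() follows, which is why cooperative multiple inheritance works predictably.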

+ Interface / Informal Interfaces (June 24, 2020, 4:13 p.m.)

Python Interface: At a high level, an interface acts as a “skeleton” or "blueprint" for designing classes. Like classes, interfaces define methods. Unlike classes, these methods are abstract. An abstract method is one that the interface simply defines. It doesn’t implement the methods. This is done by classes, which then implement the interface and give concrete meaning to the interface’s abstract methods. Python’s approach to interface design is somewhat different when compared to languages like Java, Go, and C++. These languages all have an "interface" keyword, while Python does not. ---------------------------------------------------------------- Informal Interfaces: In certain circumstances, you may not need the strict rules of a formal Python interface. Python’s dynamic nature allows you to implement an informal interface. An informal Python interface is a class that defines methods that can be overridden, but there’s no strict enforcement. ----------------------------------------------------------------
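A minimal informal-interface sketch (the class and method names are invented for illustration): the base class documents the expected methods, but nothing enforces them.

```python
class InformalParserInterface:
    """Documents the methods a parser is expected to override."""

    def load_data_source(self, path):
        raise NotImplementedError

    def extract_text(self):
        raise NotImplementedError

class EchoParser(InformalParserInterface):
    def load_data_source(self, path):
        self.path = path

    def extract_text(self):
        return f'text from {self.path}'

# No strict enforcement: an incomplete subclass still instantiates fine
# and only fails when a missing method is actually called.
parser = EchoParser()
parser.load_data_source('report.txt')
print(parser.extract_text())
```

Compare this with abc.ABC (next note), where instantiation itself fails if abstract methods are missing.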

+ abc - Abstract Base Classes (June 24, 2020, 3:29 p.m.)

Abstract Classes:
It is often useful to create an abstract class to serve as a "skeleton" or "blueprint" for a subclass. However, Python does not enforce abstract base class inheritance by default, meaning subclasses are not required to implement abstract methods of the parent class.

Example:

from abc import ABC, abstractmethod

class AbstractOperation(ABC):
    def __init__(self, operand_a, operand_b):
        self.operand_a = operand_a
        self.operand_b = operand_b
        super(AbstractOperation, self).__init__()

    @abstractmethod
    def execute(self):
        pass
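A usage sketch for the class above (the Addition subclass is invented for illustration): with @abstractmethod, any class that still has an unimplemented abstract method cannot be instantiated at all.

```python
from abc import ABC, abstractmethod

class AbstractOperation(ABC):
    def __init__(self, operand_a, operand_b):
        self.operand_a = operand_a
        self.operand_b = operand_b
        super(AbstractOperation, self).__init__()

    @abstractmethod
    def execute(self):
        pass

class Addition(AbstractOperation):
    def execute(self):
        return self.operand_a + self.operand_b

# A concrete subclass works normally:
print(Addition(2, 3).execute())

# The abstract base itself refuses to instantiate:
try:
    AbstractOperation(2, 3)
except TypeError as exc:
    print('refused:', exc)
```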

+ _thread — Low-level threading API (June 24, 2020, 3:26 p.m.)

This module provides low-level primitives for working with multiple threads (also called light-weight processes or tasks). The "threading" module provides an easier-to-use, higher-level threading API built on top of this module.
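A minimal _thread sketch (the worker function and delay are illustrative); note that _thread has no join(), which is one more reason to prefer the threading module for real code.

```python
import _thread
import time

results = []
lock = _thread.allocate_lock()

def worker(num):
    with lock:  # protect the shared list
        results.append(num)

for i in range(4):
    _thread.start_new_thread(worker, (i,))

time.sleep(0.5)  # crude substitute for join(): wait for the workers to finish
print(sorted(results))
```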

+ __main__ Python Main Function (June 22, 2020, 3:22 p.m.)

The Python main function is the starting point of any program. When the program is run, the Python interpreter runs the code sequentially. The "main" function is executed only when the file is run as a Python program. It will not run if the file is imported as a module.
-------------------------------------------------------------------
__name__:
Every module in Python has a special attribute called __name__. It is a built-in variable that returns the name of the module.

__main__:
Like other programming languages, Python too has an execution entry point, i.e., main. '__main__' is the name of the scope in which top-level code executes. Basically, you have two ways of using a Python module: run it directly as a script, or import it. When a module is run as a script, its __name__ is set to '__main__'. Thus, the value of the __name__ attribute is set to '__main__' when the module is run as the main program. Otherwise, the value of __name__ is set to the name of the module.
-------------------------------------------------------------------
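The standard idiom, sketched:

```python
def main():
    print('running as a script')

# __name__ is '__main__' only when this file is executed directly;
# on import, it holds the module's name and main() is not called.
if __name__ == '__main__':
    main()
```

This lets the same file serve both as a runnable script and as an importable module whose functions can be reused without side effects.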

+ __future__ module (June 22, 2020, 1:47 p.m.)

The __future__ module is used to make functionality available in the current version of Python even though it will only be officially introduced in a future version. For example, from __future__ import with_statement allows you to use the with statement in Python 2.5, although it is part of the language as of Python 2.6.

+ print (June 22, 2020, 1:05 p.m.)

If you don’t want characters prefaced by \ to be interpreted as special characters, you can use raw strings by adding an r before the first quote:

print('C:\some\name')   # here \n means newline!
print(r'C:\some\name')  # note the r before the quote

+ Source Code Encoding (June 22, 2020, 12:54 p.m.)

Source Code Encoding:

# -*- coding: encoding -*-

For example:

# -*- coding: cp1252 -*-

#!/usr/bin/env python3
# -*- coding: cp1252 -*-

+ Try..Except..Else..Finally (June 8, 2020, 3:22 p.m.)

try:
    data = something_that_can_go_wrong()
except IOError:
    handle_exception()
else:
    do_stuff(data)
finally:
    clean_up()
--------------------------------------------------------------------
The "try" block lets you test a block of code for errors.
The "except" block lets you handle the error.
The "else" keyword defines a block of code to be executed if no errors were raised.
The "finally" block lets you execute code, regardless of the result of the try and except blocks.
--------------------------------------------------------------------

+ Traceback exceptions (April 25, 2020, 8:10 a.m.)

import traceback

try:
    pass
except Exception:
    traceback.print_exc()  # prints the traceback itself; no need to wrap it in print()

+ Deserialize and Serialize (April 7, 2020, 10:07 a.m.)

Serialization means to convert an object into a string, and deserialization is its inverse operation (convert string -> object). Serialization is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later. The opposite operation, extracting a data structure from a series of bytes, is deserialization.
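A round-trip sketch with the standard json module (the sample data is invented for illustration):

```python
import json

data = {'name': 'falcon', 'speeds': [120, 389]}

# Serialize: object -> string
text = json.dumps(data)
print(text)

# Deserialize: string -> object
restored = json.loads(text)
print(restored == data)  # the round trip reconstructs the same structure
```

The pickle module does the same for arbitrary Python objects, producing bytes instead of text, but it is Python-specific and unsafe to load from untrusted sources.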

+ random (March 9, 2020, 10:17 a.m.)

random.choice(a_list)  # For getting one item
----------------------------------------------------------------------
random.choices(a_list, k=3)  # For getting 3 items (may get duplicate items)
----------------------------------------------------------------------
random.sample(a_list, k=3)  # For 3 unique items
----------------------------------------------------------------------
random.randint(1, 40)
----------------------------------------------------------------------

+ Create Excel Files (March 7, 2020, 4:23 p.m.)

---------------------------------------------------------------------------
pip install xlsxwriter
---------------------------------------------------------------------------
import xlsxwriter

# Create a workbook and add a worksheet.
workbook = xlsxwriter.Workbook('MyExcelFile.xlsx')
worksheet = workbook.add_worksheet()

# Some data we want to write to the worksheet.
expenses = (
    ['Rent', 1000],
    ['Gas', 100],
    ['Food', 300],
    ['Gym', 50],
)

# Start from the first cell. Rows and columns are zero indexed.
row = 0
col = 0

# Iterate over the data and write it out row by row.
for item, cost in expenses:
    worksheet.write(row, col, item)
    worksheet.write(row, col + 1, cost)
    row += 1

# Write a total using a formula.
worksheet.write(row, 0, 'Total')
worksheet.write(row, 1, '=SUM(B1:B4)')

workbook.close()
---------------------------------------------------------------------------

+ Docstrings (Jan. 15, 2020, 1:57 p.m.)

+ Primitive and Non-Primitive Data Structures (Jan. 4, 2020, 9:05 a.m.)

The primitive or basic data structures are the building blocks for data manipulation. They contain pure and simple values of data. In Python there are four primitive variable types: integers, floats, strings, and booleans.
-----------------------------------------------------------------------
Non-primitive data structures store not just a single value, but a collection of values in various formats. The non-primitive data structures are further divided into: arrays, lists, and files.

+ Files, Directories, Path (Dec. 26, 2019, 12:31 p.m.)

Create a directory:
from pathlib import Path
Path('/home/mohsen/Temp/').mkdir(parents=True, exist_ok=True)
---------------------------------------------------------------------------
Check If File Exists:
import os.path
os.path.isfile(fname)
os.path.exists("/etc")
----------------------------------
from pathlib import Path
my_file = Path("/path/to/file")
if my_file.is_file():
    # file exists
---------------------------------------------------------------------------
Check If Directory Exists:
from pathlib import Path
my_dir = Path("/path/to/directory")
if my_dir.is_dir():
    # directory exists
---------------------------------------------------------------------------
Rename a file:
from pathlib import Path
Path('.editorconfig').rename('src/.editorconfig')
---------------------------------------------------------------------------
Create a nested directory if it does not exist:
import os
os.makedirs(path, exist_ok=True)
---------------------------------------------------------------------------
Copy a file:
from shutil import copyfile
copyfile(src, dst)

Function            Copies metadata   Copies permissions   Can use buffer   Destination may be directory
shutil.copy         No                Yes                  No               Yes
shutil.copyfile     No                No                   No               No
shutil.copy2        Yes               Yes                  No               Yes
shutil.copyfileobj  No                No                   Yes              No
---------------------------------------------------------------------------
Move a file:
shutil.move(src, dst)

Move a file, overriding the destination if it already exists:
shutil.move(src, 'dst/file_name')
If you specify the full path to the destination (not just the directory), then shutil.move will overwrite any existing file.
---------------------------------------------------------------------------
Get the absolute path of the current Python file (the directory of the script being run):
import pathlib
here = str(pathlib.Path(__file__).parent.absolute())

Get the current working directory:
str(pathlib.Path().absolute())
---------------------------------------------------------------------------
Move all text files to an archive directory:
import glob
import os
import shutil

for file_name in glob.glob('*.txt'):
    new_path = os.path.join('archive', file_name)
    shutil.move(file_name, new_path)
---------------------------------------------------------------------------

+ Yield (Dec. 5, 2019, 6:21 p.m.)

Yield is a keyword that is used like return, except the function will return a generator.

def createGenerator():
    mylist = range(3)
    for i in mylist:
        yield i * i

mygenerator = createGenerator()  # create a generator
print(mygenerator)  # mygenerator is an object!
<generator object createGenerator at 0xb7555c34>

for i in mygenerator:
    print(i)
0
1
4

+ Generators (Dec. 5, 2019, 6:18 p.m.)

Generators are iterators, but you can only iterate over them once. That is because they do not store all the values in memory; they generate the values on the fly.

mygenerator = (x * x for x in range(3))
for i in mygenerator:
    print(i)
0
1
4

They calculate 0, then forget about it and calculate 1, and end by calculating 4, one by one.

+ Unit test and Test cases (Nov. 7, 2019, 5:19 p.m.)

Unit testing checks whether all the specific parts of your function's behavior are correct, which will make integrating them together with other parts much easier. A test case is a collection of unit tests that together prove that a function works as intended, across the full range of situations in which that function may find itself and that it's expected to handle. A test case should consider all possible kinds of input a function could receive from users, and therefore should include tests to represent each of these situations.
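A minimal unittest sketch (the formatted_name function is invented for illustration); each test method covers one kind of input the function is expected to handle:

```python
import unittest

def formatted_name(first, last):
    """Function under test: joins and capitalizes a name."""
    return f'{first} {last}'.title()

class NamesTestCase(unittest.TestCase):
    def test_simple_names(self):
        self.assertEqual(formatted_name('janis', 'joplin'), 'Janis Joplin')

    def test_already_capitalized(self):
        self.assertEqual(formatted_name('Wolfgang', 'Mozart'), 'Wolfgang Mozart')

if __name__ == '__main__':
    # exit=False keeps the interpreter alive after the test run
    unittest.main(argv=['first-arg-is-ignored'], exit=False)
```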

+ pipenv (Nov. 6, 2019, 9:15 p.m.)

pip3 install pipenv
This will install the latest version.
--------------------------------------------------------------
This will NOT install the latest version:
apt install pipenv
--------------------------------------------------------------
pipenv --python 3.7
--------------------------------------------------------------
pipenv shell
pipenv install django
--------------------------------------------------------------
Exit from an environment:
exit
--------------------------------------------------------------
Delete an environment: cd to the project directory, and:
pipenv --rm
--------------------------------------------------------------
Get the virtual environment path:
pipenv --venv
--------------------------------------------------------------
pipenv --where
Find out where your project home is.
--------------------------------------------------------------
Install packages listed in the Pipfile:
pipenv install
--------------------------------------------------------------
Check all available versions of a package:
pipenv install xlrd==
--------------------------------------------------------------
Update packages listed in the Pipfile:
pipenv update
--------------------------------------------------------------
Package version examples:
django = ">=2.0"
requests = ">=2.21.0"
django = "==1.8.19"
django-cleanup = "==2.1"
pillow = "==6.0"
numpy = ">=1.14.1,<1.15"
--------------------------------------------------------------
Export a requirements.txt:
pipenv lock --requirements > req.txt
pipenv lock -r > requirements.txt
pipenv lock -r -d > dev-requirements.txt
--------------------------------------------------------------
pipenv lock
This will create/update your Pipfile.lock, which you'll never need to (and are never meant to) edit manually. You should always use the generated file.
--------------------------------------------------------------
pipenv install --ignore-pipfile
This tells Pipenv to ignore the Pipfile for installation and use what's in the Pipfile.lock. Given this Pipfile.lock, Pipenv will create the exact same environment you had when you ran pipenv lock, sub-dependencies and all.
--------------------------------------------------------------
pipenv graph
This command will print out a tree-like structure showing your dependencies.
pipenv graph --reverse
You can reverse the tree to show the sub-dependencies with the parent that requires them. This reversed tree may be more useful when you are trying to figure out conflicting sub-dependencies.
--------------------------------------------------------------
pipenv open xlrd
This will open the xlrd package in the default editor, or you can specify a program with the EDITOR environment variable.
export EDITOR=geany
--------------------------------------------------------------
pipenv run <insert command here>
--------------------------------------------------------------
pipenv uninstall numpy
--------------------------------------------------------------
pipenv uninstall --all
Completely wipe all the installed packages from your virtual environment. You can replace --all with --all-dev to just remove dev packages.
--------------------------------------------------------------
Pipenv supports the automatic loading of environment variables when a .env file exists in the top-level directory. That way, when you pipenv shell to open the virtual environment, it loads your environment variables from the file. The .env file just contains key-value pairs:
SOME_ENV_CONFIG=some_value
SOME_OTHER_ENV_CONFIG=some_other_value
--------------------------------------------------------------
How to convert a requirements.txt to a Pipfile?
If you run pipenv install it should automatically detect the requirements.txt and convert it to a Pipfile.
pipenv install -r requirements.txt
pipenv install -r dev-requirements.txt --dev
--------------------------------------------------------------
--envs
Output Environment Variable options.
--------------------------------------------------------------
--bare
Minimal output.
--------------------------------------------------------------
pipenv clean --dry-run
Uninstalls all packages not specified in Pipfile.lock. --dry-run just outputs the unneeded packages.
--------------------------------------------------------------

+ Doc Strings (Oct. 9, 2019, 12:44 a.m.)

def __init__(self, type1=None, type2=None):  # known special case of super.__init__
    """
    super() -> same as super(__class__, <first argument>)
    super(type) -> unbound super object
    super(type, obj) -> bound super object; requires isinstance(obj, type)
    super(type, type2) -> bound super object; requires issubclass(type2, type)

    Typical use to call a cooperative superclass method:

    class C(B):
        def meth(self, arg):
            super().meth(arg)

    This works for class methods too:

    class C(B):
        @classmethod
        def cmeth(cls, arg):
            super().cmeth(arg)

    # (copied from class doc)
    """
    pass

+ Selenium (Oct. 9, 2019, 12:35 a.m.)

mozilla/geckodriver drivers: Copy geckodriver into /usr/local/bin
----------------------------------------------------------------
Chrome:
----------------------------------------------------------------
List of Chrome preferences:
----------------------------------------------------------------
List of Firefox preferences:
----------------------------------------------------------------
Efficient Web Crawling:
----------------------------------------------------------------

+ Generate random Hex colors (Oct. 9, 2019, 12:35 a.m.)

import random
r = lambda: random.randint(0, 255)
print('#%02X%02X%02X' % (r(), r(), r()))
----------------------------------------------------------------------
import random
color = "%06x" % random.randint(0, 0xFFFFFF)
----------------------------------------------------------------------

+ requests over SOCKS proxy (Oct. 9, 2019, 12:34 a.m.)

pip install pysocks

proxies = {
    'http': 'socks5h://',
    'https': 'socks5h://'
}
request = requests.get('', proxies=proxies)
---------------------------------------------------------------
Using socks5h will make sure that DNS resolution happens over the proxy instead of on the client-side.
---------------------------------------------------------------

+ PEP (Oct. 9, 2019, 12:33 a.m.)

PEP stands for Python Enhancement Proposal. A PEP is a design document providing information to the Python community, or describing a new feature for Python or its processes or environment.
--------------------------------------------------------
There are three kinds of PEP:

1- A Standards Track PEP describes a new feature or implementation for Python. It may also describe an interoperability standard that will be supported outside the standard library for current Python versions before a subsequent PEP adds standard library support in a future version.

2- An Informational PEP describes a Python design issue, or provides general guidelines or information to the Python community, but does not propose a new feature. Informational PEPs do not necessarily represent a Python community consensus or recommendation, so users and implementers are free to ignore Informational PEPs or follow their advice.

3- A Process PEP describes a process surrounding Python, or proposes a change to (or an event in) a process. Process PEPs are like Standards Track PEPs but apply to areas other than the Python language itself. They may propose an implementation, but not to Python's codebase; they often require community consensus; unlike Informational PEPs, they are more than recommendations, and users are typically not free to ignore them. Examples include procedures, guidelines, changes to the decision-making process, and changes to the tools or environment used in Python development. Any meta-PEP is also considered a Process PEP.
--------------------------------------------------------

+ Remove file & directories (Oct. 9, 2019, 12:33 a.m.)

os.remove() will remove a file. os.rmdir() will remove an empty directory. shutil.rmtree() will delete a directory and all its contents.
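A quick sketch exercising all three calls on a throwaway directory tree (the paths are created with tempfile, so nothing real is touched):

```python
import os
import shutil
import tempfile

# Build a small throwaway tree: root/sub/data.txt
root = tempfile.mkdtemp()
sub = os.path.join(root, 'sub')
os.mkdir(sub)
file_path = os.path.join(sub, 'data.txt')
with open(file_path, 'w') as f:
    f.write('hello')

os.remove(file_path)    # removes a single file
os.rmdir(sub)           # works only because 'sub' is now empty
shutil.rmtree(root)     # would also delete a non-empty tree recursively

print(os.path.exists(root))   # False
```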

+ Get the file name from a path (Oct. 9, 2019, 12:32 a.m.)

avatar_name = os.path.basename(request.user.avatar.url)

+ Converting Eastern Arabic numbers to Western (Oct. 9, 2019, 12:30 a.m.)

table = {
    1776: 48,  # 0
    1777: 49,  # 1
    1778: 50,  # 2
    1779: 51,  # 3
    1780: 52,  # 4
    1781: 53,  # 5
    1782: 54,  # 6
    1783: 55,  # 7
    1784: 56,  # 8
    1785: 57,  # 9
}

print('۱'.translate(table))
print('۸'.translate(table))

username = ''.join([x for x in username.translate(table)])
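The same conversion can be built with str.maketrans instead of spelling out code points by hand. A sketch; it also covers the Arabic-Indic range U+0660–0669 in addition to the Persian digits used above:

```python
# str.maketrans builds a translation table from two equal-length strings.
eastern = '٠١٢٣٤٥٦٧٨٩'   # Arabic-Indic digits, U+0660..U+0669
persian = '۰۱۲۳۴۵۶۷۸۹'   # Extended Arabic-Indic digits, U+06F0..U+06F9
western = '0123456789'

trans = str.maketrans(eastern + persian, western * 2)

print('۱۲۳'.translate(trans))   # 123
print('٧٥'.translate(trans))    # 75
```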

+ Image to String conversion (Oct. 9, 2019, 12:22 a.m.)

Convert Image to String:
import base64
with open('t.png', 'rb') as image_file:
    encoded = base64.b64encode(
----------------------------------------------------------------
Convert String to Image:
# Python 2 (legacy codec):
fh = open('imageToSave.png', 'wb')
fh.write(str.decode('base64'))
fh.close()

# Python 3:
import base64
image_base64 = request.POST['image-data'].split('base64,', 1)
fh = open('/home/mohsen/imageToSave.png', 'wb')
fh.write(base64.b64decode(image_base64[1]))
fh.close()
----------------------------------------------------------------

+ Read/Load a JSON object from a file: (Oct. 9, 2019, 12:20 a.m.)

import json

with open(file_path) as json_file:
    json_content = json.load(json_file)

print('hi', json_content[10])

+ Truncate a long string (Oct. 9, 2019, 12:17 a.m.)

data = data[:75]
----------------------------------------------------------------------
import textwrap
textwrap.shorten("Hello world!", width=12)
textwrap.shorten("Hello world", width=10, placeholder="...")
----------------------------------------------------------------------

+ Binary data (Oct. 8, 2019, 10:44 p.m.)

Binary is a number system like decimal; whereas decimal is based on ten and uses the digits zero to nine, binary is based on two and can therefore only use the digits zero and one.
---------------------------------------------------
with open('binary', 'bw') as bin_file:
    for i in range(17):
        bin_file.write(bytes([i]))

The last two lines can also be summarized as follows:

with open('binary', 'bw') as bin_file:
    bin_file.write(bytes(range(17)))

with open('binary', 'br') as binfile:
    for b in binfile:
        print(b)
---------------------------------------------------
x = 0x20
print(x)  # ==> 32
y = 0x0a
print(y)  # ==> 10
print(0b00101010)  # ==> 42 (prints the value of a binary literal)
---------------------------------------------------
for i in range(17):
    print("{0:>2} in binary is {0:>08b}".format(i))

for i in range(17):
    print("{0:>2} in hex is {0:>02x}".format(i))
---------------------------------------------------

+ Shelve (Oct. 8, 2019, 10:43 p.m.)

The shelve module provides a "shelf", which you can think of as a dictionary that is stored in a file rather than in memory. Like a dictionary, a shelf holds key: value pairs, and the values can be anything. The keys must be strings, unlike a dictionary, where keys can be any immutable objects, such as tuples. All the methods we use with dictionaries can also be used with shelf objects, so it can be really useful to think of them as persistent dictionaries. It's very easy to convert code using a dictionary to use a shelf instead.

import shelve

with'file_name') as my_shelve:
    my_shelve['a'] = 1
    my_shelve['b'] = 2
    my_shelve['c'] = 3
    my_shelve.get('a')
    del my_shelve['a']
    for key in my_shelve:

You can use it without "with" too:

my_shelve ='abc')
my_shelve['a'] = 1
...
my_shelve.close()

+ Pickle (Oct. 8, 2019, 10:43 p.m.)

Pickling is a mechanism for serializing objects. Serialization is the process that allows objects to be saved to a file so that they can later be stored or restored from that file.

import pickle

with open('abcd.pickle', 'wb') as pickle_file:
    pickle.dump(a_tuple_or_any_data, pickle_file)

with open('abcd.pickle', 'rb') as pickle_file:
    data = pickle.load(pickle_file)

+ re / regex (Oct. 8, 2019, 10:42 p.m.)

Replace text with regex (removes one or more, or any number of, spaces before and after a hyphen):
re.sub(r' +- +', '', text)  # There must be at least one space
re.sub(r' *- *', '', text)  # Zero or more spaces
--------------------------------------------------------
re.match('(http|https):', url)
url.startswith(('http:', 'https:'))
--------------------------------------------------------
Verify a string only contains letters, numbers, and underscores:
re.match("^[A-Za-z0-9_]*$", username)
--------------------------------------------------------
Find extensions using regex:
regex = re.compile(r'^.*\.(\w{3})$')
if regex.match('some_text'):
--------------------------------------------------------'"(.*)"', caller_id).group(0).replace('"', '')
--------------------------------------------------------
title_search ='<title>(.*)</title>', html, re.IGNORECASE)
--------------------------------------------------------
Match object instances have several methods and attributes; the most important ones are:
group()  Return the string matched by the RE
start()  Return the starting position of the match
end()    Return the ending position of the match
span()   Return a tuple containing the (start, end) positions of the match
--------------------------------------------------------
group() vs groups():
groups() only returns any explicitly-captured groups in your regex (denoted by round brackets in your regex), whereas group(0) returns the entire substring that's matched by your regex, regardless of whether your expression has any capture groups. The first explicit capture in your regex is indicated by group(1) instead.
--------------------------------------------------------
Why can't search give me all the substrings?
search() will only return the first match against the pattern in your input string. Use findall() or finditer() to get all matches.
--------------------------------------------------------
Get the integer 0 in this string => "999 has 0 calls ":
calls_count ='%s has (\d+) calls ' % queue, queue_details[0])
if calls_count:
    calls_count =
--------------------------------------------------------
Get the integer in this string => "(0s holdtime, ":
hold_time =' strategy \((\d+)s holdtime, ', queue_details[0])
if hold_time:
    hold_time =
--------------------------------------------------------
# Answered calls count, unanswered calls count, service level
calls_info =' W:(\d+), C:(\d+), A:(\d+), SL:(.*)%, SL2:(.*)% ', queue_details[0])
if calls_info:
    (queue_weight, answered_calls, unanswered_calls, service_level, calculate_service_level) = calls_info.groups()
--------------------------------------------------------
name_pattern = r'(?=^.{3,63}$)(?!^(\d+\.)+\d+$)(^(([a-z0-9]|[a-z0-9][a-z0-9\-]*[a-z0-9])\.)*([a-z0-9]|[a-z0-9][a-z0-9\-]*[a-z0-9])$)'
re.match(name_pattern, name)

OR

name_regex = re.compile(name_pattern)
if name_regex.match(name):
    pass
--------------------------------------------------------

+ Sorting data (Oct. 8, 2019, 10:30 p.m.)

Sorting Tuples:
stocks = [
    # (name, shares, price)
    ('AA', 100, 32.20),
    ('IBM', 50, 91.10),
    ('CAT', 150, 83.44),
    ('GE', 200, 51.23)
]

# Sorts according to the first tuple field (the name)
print(sorted(stocks))
>>> [('AA', 100, 32.2), ('CAT', 150, 83.44), ('GE', 200, 51.23), ('IBM', 50, 91.1)]
------------------------------------------------------------
# Sort by shares
print(sorted(stocks, key=lambda s: s[1]))
>>> [('IBM', 50, 91.1), ('AA', 100, 32.2), ('CAT', 150, 83.44), ('GE', 200, 51.23)]
------------------------------------------------------------
# Sort by price
print(sorted(stocks, key=lambda s: s[2]))
>>> [('AA', 100, 32.2), ('GE', 200, 51.23), ('CAT', 150, 83.44), ('IBM', 50, 91.1)]
------------------------------------------------------------
# Find the lowest price
print(min(stocks, key=lambda s: s[2]))
>>> ('AA', 100, 32.2)
------------------------------------------------------------
# Find the maximum number of shares
print(max(stocks, key=lambda s: s[1]))
>>> ('GE', 200, 51.23)
------------------------------------------------------------
# Find the 3 lowest prices
import heapq
print(heapq.nsmallest(3, stocks, key=lambda s: s[2]))
>>> [('AA', 100, 32.2), ('GE', 200, 51.23), ('CAT', 150, 83.44)]
------------------------------------------------------------
Sorting a Dictionary:
import operator
d = {1: 2, 7: 8, 31: 5, 30: 5}
e = sorted(d.items(), key=operator.itemgetter(1))
# Pass itemgetter(0) to sort by key
------------------------------------------------------------
Sorting a Dictionary:
import operator
d = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0}
sorted_d = dict(sorted(d.items(), key=operator.itemgetter(1)))
sorted_d = dict(sorted(d.items(), key=operator.itemgetter(1), reverse=True))
------------------------------------------------------------
Sort a list of objects based on an attribute of the objects:

# To sort the list in place...
my_list.sort(key=lambda x: x.count, reverse=True)

# To return a new list, use the sorted() built-in function...
my_sorted_list = sorted(my_list, key=lambda x: x.count, reverse=True)
------------------------------------------------------------

+ Manipulating network addresses (Oct. 8, 2019, 10:20 p.m.)

import ipaddress

net = ipaddress.IPv4Network('')
net
>>> IPv4Network('')
net.netmask
>>> IPv4Address('')

for n in net:

a = ipaddress.IPv4Address('')
a in net
>>> False
str(a)
>>> ''
int(a)
>>> 3232236046

+ Formatting text for Terminal (Oct. 8, 2019, 10:15 p.m.)

import textwrap
text = 'some long text ...'
print(textwrap.fill(text, 40))

+ Get the Terminal width (Oct. 8, 2019, 10:11 p.m.)

import os
size = os.get_terminal_size()
print(size.columns)
print(size.lines)

+ Performance Measurement (Oct. 8, 2019, 10 p.m.)

import time

start = time.perf_counter()
print('do some stuff...')
end = time.perf_counter()
print('Took {} seconds!'.format(end - start))
>>> Took 14.458690233001107 seconds!
----------------------------------------------------------
process_time is used to measure elapsed CPU time:
start = time.process_time()
end = time.process_time()
----------------------------------------------------------
There is also time.monotonic(), which provides a monotonic timer whose reported values are guaranteed never to go backward, even if adjustments have been made to the system clock while the program is running.
----------------------------------------------------------
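For micro-benchmarks of small snippets, the timeit module complements perf_counter. A minimal sketch:

```python
import timeit

# Time a small expression; number is how many times the statement runs
elapsed = timeit.timeit('sum(range(100))', number=10_000)
print(f'10,000 runs took {elapsed:.4f} seconds')

# repeat() runs the whole measurement several times; take the minimum
# as the most stable estimate (least interference from other processes)
best = min(timeit.repeat('sum(range(100))', number=10_000, repeat=3))
print(f'best of 3: {best:.4f} seconds')
```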

+ Format (Oct. 8, 2019, 9:47 p.m.)

txt = "For only {price:.2f} dollars!"
txt.format(price=49)
-----------------------------------------------------------------------
Truncating long strings:
Old: '%.5s' % ('xylophone',)
New: '{:.5}'.format('xylophone')
Output: xylop
-----------------------------------------------------------------------
Getitem and Getattr:
person = {'first': 'Jean-Luc', 'last': 'Picard'}
'{p[first]} {p[last]}'.format(p=person)
>>> Jean-Luc Picard

data = [4, 8, 15, 16, 23, 42]
'{d[4]} {d[5]}'.format(d=data)
>>> 23 42

class Plant(object):
    type = 'tree'
'{p.type}'.format(p=Plant())
>>> tree

class Plant(object):
    type = 'tree'
    kinds = [{'name': 'oak'}, {'name': 'maple'}]
'{p.type}: {p.kinds[0][name]}'.format(p=Plant())
>>> tree: oak
-----------------------------------------------------------------------
Padding numbers:
Old: '%4d' % (42,)
New: '{:4d}'.format(42)
Output: '  42'
-----------------------------------------------------------------------
Combining truncating and padding:
Old: '%-10.5s' % ('xylophone',)
New: '{:10.5}'.format('xylophone')
Output: 'xylop     '
-----------------------------------------------------------------------
x = 1234567890
print(format(x, ','))
>>> 1,234,567,890
-----------------------------------------------------------------------
from datetime import datetime
d = datetime(2019, 5, 21)
format(d, '%a, %b %d %m, %Y')
>>> 'Tue, May 21 05, 2019'
'The time is {:%Y-%m-%d}'.format(d)
>>> 'The time is 2019-05-21'
'{:%Y-%m-%d %H:%M}'.format(datetime(2001, 2, 3, 4, 5))
>>> 2001-02-03 04:05
-----------------------------------------------------------------------
The new-style simple formatter calls by default the __format__() method of an object for its representation. If you just want to render the output of str(...) or repr(...) you can use the !s or !r conversion flags.

class Data(object):
    def __str__(self):
        return 'str'
    def __repr__(self):
        return 'repr'

'{0!s} {0!r}'.format(Data())
>>> 'str repr'

ASCII Format:
class Data(object):
    def __repr__(self):
        return 'räpr'

'{0!r} {0!a}'.format(Data())
>>> 'räpr r\xe4pr'
-----------------------------------------------------------------------
'this is {0} test. {1:>4} {2}'.format('a', 23, 'c')
>>> 'this is a test.   23 c'

'Hello {}, How {}, you?'.format('mohsen', 'are')

for i in range(17):
    print("{0:>2} in binary is {0:>08b}".format(i))

for i in range(17):
    print("{0:>2} in hex is {0:>02x}".format(i))
-----------------------------------------------------------------------
Formatting Types:
:<  Left aligns the result (within the available space)
:>  Right aligns the result (within the available space)
:^  Center aligns the result (within the available space)
:=  Places the sign to the leftmost position
:+  Use a plus sign to indicate if the result is positive or negative
:-  Use a minus sign for negative values only
:   Use one space to insert an extra space before positive numbers (and a minus sign before negative numbers)
:,  Use a comma as a thousand separator
:_  Use an underscore as a thousand separator
:b  Binary format
:c  Converts the value into the corresponding Unicode character
:d  Decimal format
:e  Scientific format, with a lower case e
:E  Scientific format, with an upper case E
:f  Fixed-point number format
:F  Fixed-point number format, in uppercase (shows inf and nan as INF and NAN)
:g  General format
:G  General format (using an upper case E for scientific notations)
:o  Octal format
:x  Hex format, lower case
:X  Hex format, upper case
:n  Number format
:%  Percentage format
-----------------------------------------------------------------------

+ Sets (Oct. 8, 2019, 5:49 p.m.)

x = set(['foo', 'bar', 'baz', 'foo', 'qux'])
>>> x
{'qux', 'foo', 'bar', 'baz'}

>>> x = set(('foo', 'bar', 'baz', 'foo', 'qux'))
>>> x
{'qux', 'foo', 'bar', 'baz'}

To create an empty set you must use set(), as {} creates an empty dictionary.

Sets are unordered, which means that they can't be indexed. They cannot contain duplicate elements. Due to the way they're stored, it's faster to check whether an item is part of a set than part of a list.

Instead of using append to add to a set, use add. The method remove removes a specific element from a set; pop removes an arbitrary element.

Sets can be combined using mathematical operations. The union operator | combines two sets to form a new one containing items in either. The intersection operator & gets items only in both. The difference operator - gets items in the first set but not in the second. The symmetric difference operator ^ gets items in either set, but not both.

When to use a dictionary:
- When you need a logical association between a key: value pair.
- When you need a fast lookup for your data, based on a custom key.
- When your data is being constantly modified. Remember, dictionaries are mutable.

When to use the other types:
- Use lists if you have a collection of data that does not need random access. Try to choose lists when you need a simple, iterable collection that is modified frequently.
- Use a set if you need uniqueness for the elements.
- Use tuples when your data cannot change.
x1 = {'foo', 'bar', 'baz'} x2 = {'baz', 'qux', 'quux'} >>> x1.union(x2) {'baz', 'quux', 'qux', 'bar', 'foo'} >>> x1 | x2 {'baz', 'quux', 'qux', 'bar', 'foo'} >>> x1.intersection(x2) {'baz'} >>> x1 & x2 {'baz'} >>> x1.difference(x2) {'foo', 'bar'} >>> x1 - x2 {'foo', 'bar'} x1.symmetric_difference(x2) and x1 ^ x2 return the set of all elements in either x1 or x2, but not both: >>> x1.symmetric_difference(x2) {'foo', 'qux', 'quux', 'bar'} >>> x1 ^ x2 {'foo', 'qux', 'quux', 'bar'} x1.isdisjoint(x2) returns True if x1 and x2 have no elements in common: >>> x1.isdisjoint(x2) False >>> x1.issubset({'foo', 'bar', 'baz', 'qux', 'quux'}) True A set is considered to be a subset of itself: >>> x = {1, 2, 3, 4, 5} >>> x.issubset(x) True >>> x <= x True x1 < x2 returns True if x1 is a proper subset of x2: >>> x1 = {'foo', 'bar'} >>> x2 = {'foo', 'bar', 'baz'} >>> x1 < x2 True >>> x1 = {'foo', 'bar', 'baz'} >>> x2 = {'foo', 'bar', 'baz'} >>> x1 < x2 False While a set is considered a subset of itself, it is not a proper subset of itself: >>> x = {1, 2, 3, 4, 5} >>> x <= x True >>> x < x False x1.issuperset(x2) and x1 >= x2 return True if x1 is a superset of x2: >>> x1 = {'foo', 'bar', 'baz'} >>> x1.issuperset({'foo', 'bar'}) True >>> x2 = {'baz', 'qux', 'quux'} >>> x1 >= x2 False You have already seen that a set is considered a subset of itself. 
A set is also considered a superset of itself: >>> x = {1, 2, 3, 4, 5} >>> x.issuperset(x) True >>> x >= x True x1 > x2 returns True if x1 is a proper superset of x2: >>> x1 = {'foo', 'bar', 'baz'} >>> x2 = {'foo', 'bar'} >>> x1 > x2 True >>> x1 = {'foo', 'bar', 'baz'} >>> x2 = {'foo', 'bar', 'baz'} >>> x1 > x2 False A set is not a proper superset of itself: >>> x = {1, 2, 3, 4, 5} >>> x > x False >>> x1 = {'foo', 'bar', 'baz'} >>> x2 = {'foo', 'baz', 'qux'} >>> x1 |= x2 >>> x1 {'qux', 'foo', 'bar', 'baz'} >>> x1.update(['corge', 'garply']) >>> x1 {'qux', 'corge', 'garply', 'foo', 'bar', 'baz'} >>> x1 = {'foo', 'bar', 'baz'} >>> x2 = {'foo', 'baz', 'qux'} >>> x1 &= x2 >>> x1 {'foo', 'baz'} >>> x1.intersection_update(['baz', 'qux']) >>> x1 {'baz'} >>> x1 = {'foo', 'bar', 'baz'} >>> x2 = {'foo', 'baz', 'qux'} >>> x1 -= x2 >>> x1 {'bar'} >>> x1.difference_update(['foo', 'bar', 'qux']) >>> x1 set() >>> x1 = {'foo', 'bar', 'baz'} >>> x2 = {'foo', 'baz', 'qux'} >>> >>> x1 ^= x2 >>> x1 {'bar', 'qux'} >>> >>> x1.symmetric_difference_update(['qux', 'corge']) >>> x1 {'bar', 'corge'} >>> x = {'foo', 'bar', 'baz'} >>> x.add('qux') >>> x {'bar', 'baz', 'foo', 'qux'} >>> x = {'foo', 'bar', 'baz'} >>> x.remove('baz') >>> x {'bar', 'foo'} >>> x.remove('qux') Traceback (most recent call last): File "<pyshell#58>", line 1, in <module> x.remove('qux') KeyError: 'qux' >>> x = {'foo', 'bar', 'baz'} >>> x.discard('baz') >>> x {'bar', 'foo'} >>> x.discard('qux') >>> x {'bar', 'foo'} x.pop() removes and returns an arbitrarily chosen element from x. 
If x is empty, x.pop() raises an exception: >>> x = {'foo', 'bar', 'baz'} >>> x.pop() 'bar' >>> x {'baz', 'foo'} >>> x.pop() 'baz' >>> x {'foo'} >>> x.pop() 'foo' >>> x set() >>> x.pop() Traceback (most recent call last): File "<pyshell#82>", line 1, in <module> x.pop() KeyError: 'pop from an empty set' x.clear() removes all elements from x: >>> x = {'foo', 'bar', 'baz'} >>> x {'foo', 'bar', 'baz'} >>> >>> x.clear() >>> x set() Frozen Sets Python provides another built-in type called a frozenset, which is in all respects exactly like a set, except that a frozenset is immutable. You can perform non-modifying operations on a frozenset: >>> x = frozenset(['foo', 'bar', 'baz']) >>> x frozenset({'foo', 'baz', 'bar'}) >>> len(x) 3 >>> x & {'baz', 'qux', 'quux'} frozenset({'baz'}) But methods that attempt to modify a frozenset fail: >>> x = frozenset(['foo', 'bar', 'baz']) >>> x.add('qux') Traceback (most recent call last): File "<pyshell#127>", line 1, in <module> x.add('qux') AttributeError: 'frozenset' object has no attribute 'add' >>> x.pop() Traceback (most recent call last): File "<pyshell#129>", line 1, in <module> x.pop() AttributeError: 'frozenset' object has no attribute 'pop' >>> x.clear() Traceback (most recent call last): File "<pyshell#131>", line 1, in <module> x.clear() AttributeError: 'frozenset' object has no attribute 'clear' >>> x frozenset({'foo', 'bar', 'baz'}) Deep Dive: Frozensets and Augmented Assignment Since a frozenset is immutable, you might think it can’t be the target of an augmented assignment operator. But observe: >>> f = frozenset(['foo', 'bar', 'baz']) >>> s = {'baz', 'qux', 'quux'} >>> f &= s >>> f frozenset({'baz'}) What gives? Python does not perform augmented assignments on frozensets in place. The statement x &= s is effectively equivalent to x = x & s. It isn’t modifying the original x. It is reassigning x to a new object, and the object x originally referenced is gone. 
You can verify this with the id() function: >>> f = frozenset(['foo', 'bar', 'baz']) >>> id(f) 56992872 >>> s = {'baz', 'qux', 'quux'} >>> f &= s >>> f frozenset({'baz'}) >>> id(f) 56992152 f has a different integer identifier following the augmented assignment. It has been reassigned, not modified in place. Some objects in Python are modified in place when they are the target of an augmented assignment operator. But frozensets aren’t. Frozensets are useful in situations where you want to use a set, but you need an immutable object. For example, you can’t define a set whose elements are also sets, because set elements must be immutable: >>> x1 = set(['foo']) >>> x2 = set(['bar']) >>> x3 = set(['baz']) >>> x = {x1, x2, x3} Traceback (most recent call last): File "<pyshell#38>", line 1, in <module> x = {x1, x2, x3} TypeError: unhashable type: 'set' If you really feel compelled to define a set of sets (hey, it could happen), you can do it if the elements are frozensets, because they are immutable: >>> x1 = frozenset(['foo']) >>> x2 = frozenset(['bar']) >>> x3 = frozenset(['baz']) >>> x = {x1, x2, x3} >>> x {frozenset({'bar'}), frozenset({'baz'}), frozenset({'foo'})} Likewise, recall from the previous tutorial on dictionaries that a dictionary key must be immutable. You can’t use the built-in set type as a dictionary key: >>> x = {1, 2, 3} >>> y = {'a', 'b', 'c'} >>> >>> d = {x: 'foo', y: 'bar'} Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> d = {x: 'foo', y: 'bar'} TypeError: unhashable type: 'set' If you find yourself needing to use sets as dictionary keys, you can use frozensets: >>> x = frozenset({1, 2, 3}) >>> y = frozenset({'a', 'b', 'c'}) >>> >>> d = {x: 'foo', y: 'bar'} >>> d {frozenset({1, 2, 3}): 'foo', frozenset({'c', 'a', 'b'}): 'bar'}

+ Connect to PostgreSQL (Sept. 12, 2016, 7:56 p.m.)

import psycopg2
from psycopg2.extras import DictCursor

connection = psycopg2.connect(database="postgres", user="postgres", password="postgres", port=5432)
cur = connection.cursor(cursor_factory=DictCursor)
cur.execute("""SELECT * from teacher where teacher_id='203'""")
rec = cur.fetchone()
print(rec['id'])
----------------------------------------------------
If you're doing an insertion or creating a table, you need to commit at the end:
connection.commit()
----------------------------------------------------

+ Python list subtraction (April 25, 2016, 11:17 p.m.)

list1 = ['a', 'b', 'c', 'd']
list2 = ['b', 'c']
list3 = list(set(list1) - set(list2))
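Note that going through set() does not preserve list1's order. A sketch of an order-preserving variant:

```python
list1 = ['a', 'b', 'c', 'd']
list2 = ['b', 'c']

# A list comprehension keeps list1's order; converting list2 to a set
# keeps the membership test fast for large inputs.
exclude = set(list2)
filtered = [x for x in list1 if x not in exclude]
print(filtered)   # ['a', 'd']
```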

+ Add leading zeroes to numbers (Jan. 26, 2016, 7:20 p.m.)

str(1).zfill(4)
>>> '0001'

+ Group a list of dictionaries (Dec. 20, 2015, 2:08 p.m.)

from itertools import groupby
d = [{'a': 1}, {'a': 2}, {'a': 2}, {'a': 3}, {'a': 3}]
[(name, list(group)) for name, group in groupby(d, lambda p: p['a'])]
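groupby() only merges *adjacent* items with equal keys, so input that is not already ordered by the grouping key must be sorted with the same key first. A sketch:

```python
from itertools import groupby

d = [{'a': 3}, {'a': 1}, {'a': 2}, {'a': 3}, {'a': 2}]

# Sort by the grouping key first, then group
key = lambda p: p['a']
grouped = [(name, list(group)) for name, group in groupby(sorted(d, key=key), key)]
print(grouped)
# [(1, [{'a': 1}]), (2, [{'a': 2}, {'a': 2}]), (3, [{'a': 3}, {'a': 3}])]
```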

+ Running Shell Commands (Oct. 5, 2015, 2:34 p.m.)

import subprocess

# Use this if you need to run a command using `sudo`:
passwd = subprocess.Popen(['echo', 'Mohsen123'], stdout=subprocess.PIPE)
-------------------------------------------------------------------------
def run_command(command, password=None, return_list=True):
    if not password:
        p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    else:
        p = subprocess.Popen(command, stdin=password.stdout, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)

    # The result is in bytes by default, so it should get converted to utf-8.
    # There is a "\n" at the end of each line. Let's get rid of them too.
    result = [x.decode('utf-8').replace('\n', '') for x in p.stdout.readlines()]

    if return_list:
        return result
    elif result:
        return result[0]
-------------------------------------------------------------------------
Another command example:
cmd = 'sudo -S asterisk -rx "core show channels verbose" | grep "from-sip"'
-------------------------------------------------------------------------

+ Threading (June 18, 2015, 9:46 a.m.)

import threading

some_threads = []
some_threads.append(threading.Thread(target=save_sheet_to_db, args=(session, sheet, carrier)))

for some_thread in some_threads:

for some_thread in some_threads:
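concurrent.futures offers a higher-level alternative to managing Thread objects by hand. A sketch with a stand-in worker (save_sheet_to_db here is a dummy, not the real function from the note):

```python
from concurrent.futures import ThreadPoolExecutor

def save_sheet_to_db(sheet):
    # Stand-in worker; just pretends to process a sheet
    return f'saved {sheet}'

sheets = ['jan', 'feb', 'mar']

# submit() returns a Future per task; exiting the with-block joins all workers
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(save_sheet_to_db, s) for s in sheets]
    results = [f.result() for f in futures]

print(results)   # ['saved jan', 'saved feb', 'saved mar']
```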

+ Iterate through two lists in inner list (Dec. 25, 2014, 9:52 a.m.)

# d is a dict whose values are the lists below:
# list(d.values()) == [[3, 3, 7, 8], ['a', 'b', 'd', 3]]
[y for x in d.values() for y in x]
>>> [3, 3, 7, 8, 'a', 'b', 'd', 3]

+ Limiting floats to two decimal points (Nov. 15, 2014, 1:11 p.m.)

f = 1000.1234
round(f, 2)

+ Get all object attributes (Nov. 15, 2014, 12:26 p.m.)


+ Requests (Nov. 1, 2014, 11:04 a.m.)

Working with JSON responses (Python 2, urllib2):

import json
import urllib2

data = json.load(urllib2.urlopen('http://someurl/path/to/json'))
----------------------------------------------------------------
import requests

r = requests.get('')
r.json()
[{u'repository': {u'open_issues': 0, u'url': '
----------------------------------------------------------------
import json
import requests

url = ''
params = dict(
    origin='Chicago,IL',
    destination='Los+Angeles,CA',
    waypoints='Joplin,MO|Oklahoma+City,OK',
    sensor='false',
)
resp = requests.get(url=url, params=params)
data = json.loads(resp.text)
----------------------------------------------------------------
r = requests.get('')
----------------------------------------------------------------
Response Code
We can check the response status code, and do a status-code lookup with the requests.codes lookup object:

r = requests.get('')
r.status_code
>>> 200
r.status_code ==
>>> True
requests.codes['temporary_redirect']
>>> 307
requests.codes.teapot
>>> 418
requests.codes['\o/']
>>> 200
----------------------------------------------------------------
Get the content
Get the content of the server's response:

import requests

r = requests.get('')
print(r.text)

# Requests also comes with a built-in JSON decoder, in case you're dealing with JSON data:
import requests

r = requests.get('')
print(r.json())
----------------------------------------------------------------
Headers
We can view the server's response headers as a Python dictionary, and we can access the headers using any capitalization we want.
Use .get() for a header that may not exist in the Response; it returns None instead of raising KeyError:

r.headers
{
    'status': '200 OK',
    'content-encoding': 'gzip',
    'transfer-encoding': 'chunked',
    'connection': 'close',
    'server': 'nginx/1.0.4',
    'x-runtime': '148ms',
    'etag': '"e1ca502697e5c9317743dc078f67693f"',
    'content-type': 'application/json; charset=utf-8'
}

r.headers['Content-Type']
>>> 'application/json; charset=utf-8'
r.headers.get('content-type')
>>> 'application/json; charset=utf-8'
r.headers.get('X-Random')
>>> None

# Get the headers of a given URL (HEAD request):
resp = requests.head("")
print(resp.status_code, resp.text, resp.headers)
----------------------------------------------------------------
Encoding
Requests will automatically decode content from the server. Most Unicode charsets are seamlessly decoded. When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, with the r.encoding property. If you change the encoding, Requests will use the new value of r.encoding whenever you call r.text:

print(r.encoding)
>>> utf-8
r.encoding = 'ISO-8859-1'
----------------------------------------------------------------
Custom Headers
If you'd like to add HTTP headers to a request, simply pass a dict to the headers parameter:

import json
import requests

url = ''
payload = {'some': 'data'}
headers = {'content-type': 'application/json'}
r =, data=json.dumps(payload), headers=headers)
----------------------------------------------------------------
Redirection and History
Requests will automatically perform location redirection while using the GET and OPTIONS verbs. GitHub redirects all HTTP requests to HTTPS.
----------------------------------------------------------------
You can use the other HTTP request types as well (PUT, DELETE, HEAD and OPTIONS):

r = requests.put("")
r = requests.delete("")
r = requests.head("")
r = requests.options("")

# This small script creates a GitHub repo:
import requests, json

github_url = ""
data = json.dumps({'name': 'test', 'description': 'some test repo'})
r =, data, auth=('user', '*****'))
print(r.json())
----------------------------------------------------------------
Errors and Exceptions
In the event of a network problem (e.g. DNS failure, refused connection, etc.), Requests will raise a ConnectionError exception. In the event of the rare invalid HTTP response, Requests will raise an HTTPError exception. If a request times out, a Timeout exception is raised. If a request exceeds the configured number of maximum redirections, a TooManyRedirects exception is raised. All exceptions that Requests explicitly raises inherit from requests.exceptions.RequestException.
----------------------------------------------------------------

+ type (Oct. 10, 2014, 1:22 a.m.)

The first use of type() is the most widely known and used: to determine the type of an object. Here, Python novices commonly interrupt and say, "But I thought Python didn't have types!" On the contrary, everything in Python has a type (even the types!) because everything is an object. Let's look at a few examples:

>>> type(1)
<class 'int'>
>>> type('foo')
<class 'str'>
>>> type(3.0)
<class 'float'>
>>> type(float)
<class 'type'>

The type of type
Everything is as expected, until we check the type of float. <class 'type'>? What is that? Well, odd, but let's continue:

>>> class Foo(object):
...     pass
...
>>> type(Foo)
<class 'type'>

Ah! <class 'type'> again. Apparently the type of all classes themselves is type (regardless of whether they're built-in or user-defined). What about the type of type itself?

>>> type(type)
<class 'type'>

Well, it had to end somewhere. type is the type of all types, including itself. In actuality, type is a metaclass, or "a thing that builds classes". Classes, like list, build instances of that class, as in my_list = list(). In the same way, metaclasses build types, like Foo in:

class Foo(object):
    pass

As mentioned, it turns out that type has a totally separate use when called with three arguments. type(name, bases, dict) creates a new type, programmatically. If I had the following code:

class Foo(object):
    pass

We could achieve the exact same effect with the following:

Foo = type('Foo', (), {})

Foo is now referencing a class named "Foo", whose base class is object (classes created with type, if specified without a base class, are automatically made new-style classes). That's all well and good, but what if we want to add member functions to Foo?
This is easily achieved by setting attributes of Foo, like so:

def always_false(self):
    return False

Foo.always_false = always_false

We could have done it all in one go with the following:

Foo = type('Foo', (), {'always_false': always_false})

Of course, the bases parameter is a tuple of base classes of Foo. We've been leaving it empty, but it's perfectly valid to create a new class derived from Foo, again using type to create it (note the one-element tuple):

FooBar = type('FooBar', (Foo,), {})
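Putting the pieces of this note together, a runnable sketch of the three-argument type() call:

```python
def always_false(self):
    return False

# Build the class programmatically; equivalent to a normal class statement.
Foo = type('Foo', (), {'always_false': always_false})

# Derive from Foo; bases must be a tuple, hence (Foo,).
FooBar = type('FooBar', (Foo,), {})

fb = FooBar()
print(fb.always_false())                 # False
print(issubclass(FooBar, Foo))           # True
print(type(Foo) is type)                 # True: classes are instances of type
```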

+ Read Excel Files (Aug. 22, 2014, 11:34 a.m.)

Read Excel files from Python
Use the excellent xlrd package, which works on any platform. That means you can read Excel files from Python in Linux! (Note: xlrd 2.0+ only reads the old .xls format; for .xlsx files, use openpyxl.)

Example usage:

Open the workbook:
import xlrd
wb = xlrd.open_workbook('myworkbook.xls')

Check the sheet names:
wb.sheet_names()
wb.sheets()

Get the first sheet, either by index or by name:
sh = wb.sheet_by_index(0)
sh = wb.sheet_by_name(u'Sheet1')

Iterate through rows, returning each as a list that you can index:
for rownum in range(sh.nrows):
    print(sh.row_values(rownum))

If you just want the first column:
first_column = sh.col_values(0)

Index individual cells:
cell_A1 = sh.cell(0, 0).value
cell_C4 = sh.cell(rowx=3, colx=2).value
(Note: Python indices start at zero, but Excel starts at one.)

+ Get Python version (Aug. 22, 2014, 11:11 a.m.)

Get Python version:

import sys
sys.version
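For programmatic version checks, sys.version_info (a comparable tuple) is more convenient than parsing the version string:

```python
import sys

print(sys.version)        # human-readable version string
print(sys.version_info)   # structured, e.g. sys.version_info(major=3, minor=10, ...)

# Tuple comparison makes feature checks simple:
if sys.version_info >= (3, 8):
    print("walrus operator available")
```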

+ Installation (Feb. 4, 2016, 8:35 a.m.)

1- Install these packages:
apt install libbz2-dev libsqlite3-dev python3-dev libedit-dev libreadline-dev libssl-dev make build-essential zlib1g-dev libffi-dev

For CentOS:
yum install bzip2-devel bzip2-libs python-devel openssl-devel zlib-devel ncurses-devel sqlite-devel readline-devel gdbm-devel db4-devel libpcap-devel xz-devel

2- Download the Python version you need: (Download the tgz file)

3- tar xf Python-3.10.6.tgz && cd Python-3.10.6
./configure --prefix=/usr/local --enable-shared --enable-unicode=ucs4 LDFLAGS="-Wl,--rpath=/usr/local/lib"

4- Build the source code, and install:
make -j4
In case of getting errors for a missing _ssl module, refer to the end of this note to download OpenSSL and pass the path of the library.
sudo make install
ln /usr/local/lib/ /usr/lib64/
---------------------------------------------------------------------------------
At the end of the installation, if you got the error:
Ignoring ensurepip failure: pip 9.0.1 requires SSL/TLS
You need to install the following package:
apt-get install libssl1.0
and then:
make -j4
make install
---------------------------------------------------------------------------------
For compiling the _ssl module in Python, you need to download the OpenSSL source package, extract it, and pass the path to the "./configure" step of the Python installation.
1- Download the latest version from the following link:
2- Pass the path like this:
./configure --prefix=/usr/local --enable-shared --enable-unicode=ucs4 --with-openssl=/usr/src/openssl-3.0.3 --with-ssl-default-suites=openssl CFLAGS="-I/usr/src/openssl-3.0.3/include" LDFLAGS="-L/usr/src/openssl-3.0.3/ -Wl,--rpath=/usr/local/lib"
---------------------------------------------------------------------------------

+ Virtualenv (Feb. 4, 2016, 8:04 a.m.)

Installation:
apt install python3-pip
pip install virtualenv
--------------------------------------------------------------------
Usage:
1- mkdir ~/.virtualenvs
2- virtualenv -p /usr/bin/python3 ~/.virtualenvs/django3
3- source ~/.virtualenvs/django3/bin/activate
--------------------------------------------------------------------
Find the path to the virtualenv (when it's already activated):
echo $VIRTUAL_ENV
--------------------------------------------------------------------

+ PIP (Aug. 22, 2014, 7:44 a.m.)

pip install SomePackage            # latest version
pip install SomePackage==1.0.4     # specific version
pip install 'SomePackage>=1.0.4'   # minimum version
pip install -r requirements.txt
pip install --upgrade SomePackage
------------------------------------------------------------------------
Install a package with setuptools extras:
pip install SomePackage[PDF]
pip install SomePackage[PDF]==3.0
pip install -e .[PDF]==3.0         # editable project in current directory
------------------------------------------------------------------------
Install a particular source archive file:
pip install ./downloads/SomePackage-1.0.4.tar.gz
pip install http://my.package.repo/
------------------------------------------------------------------------
Install from alternative package repositories.
Install from a different index, and not PyPI:
pip install --index-url http://my.package.repo/simple/ SomePackage

Search an additional index during install, in addition to PyPI:
pip install --extra-index-url http://my.package.repo/simple SomePackage

Install from a local flat directory containing archives (and don't scan indexes):
pip install --no-index --find-links=file:///local/dir/ SomePackage
pip install --no-index --find-links=/local/dir/ SomePackage
pip install --no-index --find-links=relative/dir/ SomePackage
------------------------------------------------------------------------
Find pre-release and development versions, in addition to stable versions. By default, pip only finds stable versions:
pip install --pre SomePackage
--------------------------------------------------------------------------
pip uninstall [options] <package> ...
pip uninstall [options] -r <requirements file> ...
Options:
-r, --requirement <file>    Uninstall all the packages listed in the given requirements file. This option can be used multiple times.
-y, --yes                   Don't ask for confirmation of uninstall deletions.
--------------------------------------------------------------------------
pip freeze [options]
Description: Output installed packages in requirements format.
Options:
-r, --requirement <file>    Use the order in the given requirements file and its comments when generating output.
-f, --find-links <url>      URL for finding packages, which will be added to the output.
-l, --local                 If in a virtualenv that has global access, do not output globally-installed packages.
Examples:
Generate output suitable for a requirements file:
$ pip freeze
Jinja2==2.6
Pygments==1.5
Sphinx==1.1.3
docutils==0.9.1
Generate a requirements file and then install from it in another environment:
$ env1/bin/pip freeze > requirements.txt
$ env2/bin/pip install -r requirements.txt
--------------------------------------------------------------------------
pip list [options]
Description: List installed packages, including editable ones.
Options:
-o, --outdated    List outdated packages (excluding editables)
-u, --uptodate    List up-to-date packages (excluding editables)
-e, --editable    List editable projects.
-l, --local       If in a virtualenv that has global access, do not list globally-installed packages.
--pre             Include pre-release and development versions. By default, pip only finds stable versions.
Examples:
List installed packages:
$ pip list
Pygments (1.5)
docutils (0.9.1)
Sphinx (1.1.2)
Jinja2 (2.6)
List outdated packages (excluding editables), and the latest version available:
$ pip list --outdated
docutils (Current: 0.9.1 Latest: 0.10)
Sphinx (Current: 1.1.2 Latest: 1.1.3)
--------------------------------------------------------------------------
pip show [options] <package> ...
Description: Show information about one or more installed packages.
Options:
-f, --files    Show the full list of installed files for each package.
Examples:
Show information about a package:
$ pip show sphinx
The output will be:
Name: Sphinx
Version: 1.1.3
Location: /my/env/lib/pythonx.x/site-packages
Requires: Pygments, Jinja2, docutils
--------------------------------------------------------------------------
pip search [options] <query>
Description: Search for PyPI packages whose name or summary contains <query>. (Note: PyPI has since disabled its search API, so pip search no longer works against the default index.)
Options:
--index <url>    Base URL of Python Package Index (default
Examples:
Search for "peppercorn":
pip search peppercorn
pepperedform - Helpers for using peppercorn with formprocess.
peppercorn - A library for converting a token stream into [...]
--------------------------------------------------------------------------
pip zip [options] <package> ...
Description: Zip individual packages. (Removed in modern pip versions.)
Options:
--unzip           Unzip (rather than zip) a package.
--no-pyc          Do not include .pyc files in zip files (useful on Google App Engine).
-l, --list        List the packages available, and their zip status.
--sort-files      With --list, sort packages according to how many files they contain.
--path <paths>    Restrict operations to the given paths (may include wildcards).
-n, --simulate    Do not actually perform the zip/unzip operation.
--------------------------------------------------------------------------
This command will download the zipped/tar file to the specified location:
pip download `package_name`

pip download \
    --only-binary=:all: \
    --platform linux_x86_64 \
    --python-version 33 \
    --implementation cp \
    --abi cp34m \
    'pip>=8'

pip download \
    --only-binary=:all: \
    --platform macosx-10_10_x86_64 \
    --python-version 27 \
    --implementation cp \
    SomePackage
--------------------------------------------------------------------------
pip install --allow-all-external pil --allow-unverified pil
--------------------------------------------------------------------------
ReadTimeoutError: HTTPSConnectionPool(host='', port=443)
pip install --default-timeout=200 <package_name>
--------------------------------------------------------------------------
pip install pip-review
pip-review --local --interactive
--------------------------------------------------------------------------
mkdir pip_files && cd pip_files
pip download -r requirements.txt
--------------------------------------------------------------------------
Disable cache:
--no-cache-dir
--------------------------------------------------------------------------
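A pip list-style view is also available from inside Python via the stdlib importlib.metadata (Python 3.8+); a minimal sketch:

```python
from importlib import metadata

# Map each installed distribution name to its version, similar to `pip list`.
# Skip entries with broken metadata (no Name field).
installed = {dist.metadata['Name']: dist.version
             for dist in metadata.distributions()
             if dist.metadata['Name']}
for name in sorted(installed, key=str.lower)[:5]:
    print(name, installed[name])

# Look up a single package's version; raises PackageNotFoundError if absent.
try:
    print(metadata.version('pip'))
except metadata.PackageNotFoundError:
    print('pip is not installed here')
```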

+ DateTime (Jan. 29, 2016, 11:58 a.m.)

from datetime import datetime, timedelta
from django.utils.timezone import make_aware, get_current_timezone

datetime.fromtimestamp(int(request.POST['date']) / 1000).date()
datetime.fromtimestamp(int(timestamp))
datetime.fromtimestamp(int(timestamp)).date()
----------------------------------------------------------------------------
date_time = Call.objects.order_by('-id').first().date_time
timestamp = int(date_time.strftime('%s'))
datetime.fromtimestamp(timestamp)
---------------------------------------------------------------------------- - timedelta(hours=24)
----------------------------------------------------------------------------
now =
dt_name = '%s-%s-%s--%s-%s' % (now.year, now.month,, now.hour, now.minute)
----------------------------------------------------------------------------
now = make_aware(, get_current_timezone())
current_hour = make_aware(datetime(now.year, now.month,, now.hour, 00, 00), get_current_timezone())
int((now - current_hour).seconds / 5)
end_time = current_hour + timedelta(seconds=5)
----------------------------------------------------------------------------
Timestamp:
import time
time.mktime(mydate.timetuple())
----------------------------------------------------------------------------
Difference between two dates:
(appointment_date() -
----------------------------------------------------------------------------
Date string to date object:
datetime.strptime('24052010', '%d%m%Y').date()
----------------------------------------------------------------------------
Iterate through two dates:
start_date =
end_date = start_date.replace(year=start_date.year + 1)
for day_num in range((end_date - start_date).days + 1):
    date = start_date + timedelta(days=day_num)
----------------------------------------------------------------------------
from datetime import datetime
dt = datetime(2017, 1, 1, 12, 30, 59, 0)
----------------------------------------------------------------------------
datetime.strptime('2014-12-04', '%Y-%m-%d').date()
----------------------------------------------------------------------------
Get string of Date or DateTime object:
str(
str(
Get object from the string format:
datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S.%f')
In case of getting an error like "ValueError: unconverted data remains: +00:00":
datetime.strptime(date_time_str.split('+')[0], '%Y-%m-%d %H:%M:%S.%f')
----------------------------------------------------------------------------
Subtract / add to datetime:
from datetime import datetime, timedelta
d = - timedelta(days=days_to_subtract)
start_dt - timedelta(hours=1)
----------------------------------------------------------------------------
import datetime
selected_date =
if request.POST:
    selected_date = datetime.datetime.strptime(request.POST['date'], '%Y-%m-%d').date()
earlier_date = selected_date - datetime.timedelta(days=1)
start_dt = datetime.datetime(earlier_date.year, earlier_date.month,, 23, 0, 0)
end_dt = datetime.datetime(selected_date.year, selected_date.month,, 23, 59, 59)
----------------------------------------------------------------------------
Determine whether datetimes are aware or naive:
from django.utils import timezone
timezone.is_aware(dt_obj)
timezone.is_naive(dt_obj)
----------------------------------------------------------------------------
def convert_to_tehran_dt(dt):
    local_tz = pytz.timezone('Asia/Tehran')
    local_dt = dt.replace(tzinfo=pytz.utc).astimezone(local_tz)
    # return local_tz.normalize(local_dt)
    return local_dt
----------------------------------------------------------------------------
Get last Friday:
pip install python-dateutil

from datetime import datetime
from dateutil.relativedelta import relativedelta, FR + relativedelta(weekday=FR(-1))
----------------------------------------------------------------------------
Hours, minutes, seconds from a total sum of integers:
datetime.timedelta(seconds=total_sum_seconds)
OR
datetime.timedelta(minutes=total_sum_minutes)
----------------------------------------------------------------------------
Get Date/Time in only hours, minutes, seconds:

def get_duration(duration):
    hours = int(duration / 3600)
    minutes = int(duration % 3600 / 60)
    seconds = int((duration % 3600) % 60)
    return '{:02d}:{:02d}:{:02d}'.format(hours, minutes, seconds)

print(get_duration(30512))
----------------------------------------------------------------------------
Convert DateTime to string:
now =
year = now.strftime("%Y")
month = now.strftime("%m")
day = now.strftime("%d")
time = now.strftime("%H:%M:%S")
date_time = now.strftime("%m/%d/%Y, %H:%M:%S")
----------------------------------------------------------------------------
Using format:
d = datetime(2019, 5, 21)
format(d, '%a, %b %d %m, %Y')
>>> 'Tue, May 21 05, 2019'
'The time is {:%Y-%m-%d}'.format(d)
>>> 'The time is 2019-05-21'
----------------------------------------------------------------------------
time.tzname
>>> ('+0330', '+0430')
time.timezone
>>> -12600
----------------------------------------------------------------------------
Check if DST (Daylight Saving Time) is in effect:
if time.daylight:
----------------------------------------------------------------------------
tz = pytz.timezone('Asia/Tehran')
local_time =
----------------------------------------------------------------------------
pytz.all_timezones
----------------------------------------------------------------------------
for x in sorted(pytz.country_names):
    print('{}: {}:'.format(x, pytz.country_names[x]), end=' ')
    if x in pytz.country_timezones:
        print(pytz.country_timezones[x])
    else:
        print('No timezone defined.')
----------------------------------------------------------------------------
for x in sorted(pytz.country_names):
    print('{}: {}:'.format(x, pytz.country_names[x]), end=' ')
    if x in pytz.country_timezones:
        for zone in sorted(pytz.country_timezones[x]):
            tz = pytz.timezone(zone)
            local_time =
            print("\t\t{}: {}".format(zone, local_time))
    else:
        print('\t\tNo timezone defined.')
----------------------------------------------------------------------------
local_time =
utc_time = datetime.datetime.utcnow()
aware_local_time = pytz.utc.localize(local_time)
aware_utc_time = pytz.utc.localize(utc_time)
print(aware_local_time, aware_utc_time)
print(aware_utc_time.tzinfo)
aware_local_time = pytz.utc.localize(utc_time).astimezone()
print(aware_local_time)
----------------------------------------------------------------------------
Get hour_minute_seconds:
'{:%H_%M_%S}'.format(

Get string from date-time object:
'{:%Y %m %d %H:%M:%S}'.format(
----------------------------------------------------------------------------
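The pytz conversions above can be done with the stdlib zoneinfo module on Python 3.9+, with no localize/normalize dance. A sketch:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since 3.9; may need the tzdata package on Windows

# Convert an aware UTC datetime to Tehran local time.
utc_dt = datetime(2020, 1, 1, 0, 0, tzinfo=timezone.utc)
tehran_dt = utc_dt.astimezone(ZoneInfo('Asia/Tehran'))
print(tehran_dt)  # Tehran standard time is UTC+03:30

# Attaching a zone to a naive datetime is just tzinfo=...:
local = datetime(2020, 6, 1, 12, 0, tzinfo=ZoneInfo('Asia/Tehran'))
print(local.utcoffset())
```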

+ Accessing index in loops (July 29, 2015, 12:32 p.m.)

names = ['Mohsen', 'Hadi', 'Farhad']
for index, name in enumerate(names):
    print(index, name)
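enumerate also takes a start parameter when one-based numbering is wanted:

```python
names = ['Mohsen', 'Hadi', 'Farhad']

# start=1 makes the index begin at 1 instead of 0.
for index, name in enumerate(names, start=1):
    print(index, name)
```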