Hello World in x86_64 Assembly

In this note I will describe setting up the IDE and writing a first Hello World in x86_64 assembly for Ubuntu Linux.
Let's start by installing the SASM IDE and the nasm assembler:

Next, launch SASM and write the Hello World:


global main

section .text

main:
    mov rbp, rsp      ; for correct debugging
    mov rax, 1        ; write(
    mov rdi, 1        ;   STDOUT_FILENO,
    mov rsi, msg      ;   "Hello, world!\n",
    mov rdx, msglen   ;   sizeof("Hello, world!\n")
    syscall           ; );

    mov rax, 60       ; exit(
    mov rdi, 0        ;   EXIT_SUCCESS
    syscall           ; );

section .rodata
    msg: db "Hello, world!", 10
    msglen: equ $-msg

The Hello World code is taken from James Fisher's blog and adapted for assembly and debugging in SASM. The SASM documentation states that the entry point must be a function named main; otherwise debugging and compilation of the code will not work correctly.
What did we do in this code? We made a syscall: a request to the Linux operating system kernel, with the correct arguments in the registers and a pointer to the string in the data section.

Under the magnifying glass

Let's look at the code in more detail:

global – an assembler directive that declares global symbols with string names. A good analogy is the interfaces of C/C++ header files. In this case we declare the main symbol for the entry function.

section – an assembler directive that defines sections (segments) of code. The section and segment directives are equivalent. The program code is placed in the .text section.

Next we declare the start of the main function. In assembly, functions are called subroutines.

The first machine instruction, mov, copies the value of the second operand into the first. In this case we copy the value of the rsp register into rbp. From the comment we can tell that SASM added this line to simplify debugging; apparently that is a private matter between SASM and the gdb debugger.

Next, look at the code up to the .rodata data section: two syscall invocations. The first prints the Hello World string, the second exits the application with the correct code 0.

Imagine that the registers are variables named rax, rdi, rsi, rdx, r10, r8, r9. By analogy with high-level languages, let's flip assembly's vertical notation into a horizontal one; a syscall invocation then looks like this:
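Under this analogy, a system call can be sketched as a hypothetical horizontal function call, with the register slots of the Linux x86_64 syscall convention as the arguments:

```
rax = syscall(rax, rdi, rsi, rdx, r10, r8, r9)
```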

The text-printing call then becomes:
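In that hypothetical horizontal notation, with msg and msglen taken from the .rodata section of the listing above, the print call can be sketched as:

```
syscall(1, 1, msg, msglen)   // rax = 1 (write), rdi = 1 (stdout), rsi = msg, rdx = msglen
```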

And the exit call with the correct code 0:
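In the same hypothetical notation:

```
syscall(60, 0)   // rax = 60 (exit), rdi = 0 (EXIT_SUCCESS)
```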

Let's examine the arguments in more detail. In the header file asm/unistd_64.h we find the number of the __NR_write function: 1. Next, we look up the arguments for write in the documentation:
ssize_t write(int fd, const void *buf, size_t count);

The first argument is a file descriptor, the second is a buffer with the data, the third is the number of bytes to write to the descriptor. We look up the file descriptor number for standard output; the stdout manual gives the value 1. The rest is simple: pass the pointer to the Hello World string buffer from the .rodata data section (msg) and the byte count (msglen), load the correct arguments into the rax, rdi, rsi, rdx registers, and invoke syscall.

The notation for constant strings and their lengths is described in the nasm manual.

Hash table

A hash table allows you to implement an associative array (dictionary) data structure, with an average performance of O(1) for insert, delete, and search operations.

Below is an example of the simplest implementation of a hash map in Node.js:
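The original listing is not preserved in this note, so here is a minimal sketch of such a hash map with separate chaining; the HashMap name, the bucket count, and the toy character-sum hash function are all illustrative:

```javascript
// Minimal hash map sketch: an array of buckets, each bucket holding
// the head of a singly linked list of { key, value, next } nodes.
class HashMap {
  constructor(size = 65535) {
    this.size = size;
    this.buckets = new Array(size).fill(null); // array of list heads
  }

  // Toy hash function: sum the character codes, then map to an index.
  hash(key) {
    let sum = 0;
    for (const ch of String(key)) sum += ch.charCodeAt(0);
    return sum % this.size;
  }

  set(key, value) {
    const index = this.hash(key);
    let node = this.buckets[index];
    while (node !== null) {
      if (node.key === key) { node.value = value; return; } // replace
      if (node.next === null) break; // tail reached, append below
      node = node.next;
    }
    const newNode = { key, value, next: null };
    if (node === null) this.buckets[index] = newNode; // empty bucket
    else node.next = newNode; // collision: append to the chain
  }

  get(key) {
    let node = this.buckets[this.hash(key)];
    while (node !== null) {
      if (node.key === key) return node.value;
      node = node.next; // collision: walk the chain
    }
    return undefined; // key not present
  }
}

const map = new HashMap();
map.set("hello", "world");
map.set("olleh", "dlrow"); // same character sum: lands in the same bucket
console.log(map.get("hello")); // world
console.log(map.get("olleh")); // dlrow
```

The two keys in the usage example collide on purpose, so both the overwrite path and the chain-walk path from the steps below are exercised.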

How does it work? Let’s watch the hands:

  • There is an array inside the hash map
  • Each array element holds a pointer to the first node of a linked list
  • Memory is allocated for the array of pointers (for example, 65535 elements)
  • A hash function is implemented: its input is a dictionary key, and internally it can do anything, but in the end it returns the index of an array element

How does writing work:

  • The input is a key-value pair
  • The hash function returns an index for the key
  • We get a linked list node from the array by that index
  • We check whether it matches the key
  • If it matches, we replace the value
  • If it does not match, we move on to the next node until we either find a node with the required key or run out of nodes
  • If no node is found, we create one at the end of the linked list

How does searching by key work:

  • The input is a key
  • The hash function returns an index for the key
  • We get a linked list node from the array by that index
  • We check whether it matches the key
  • If it matches, we return the value
  • If it does not match, we move on to the next node until we either find a node with the required key or run out of nodes

Why do we need a linked list inside the array? Because of possible collisions when computing the hash function: several different key-value pairs can end up at the same index in the array, and in that case the linked list is traversed to find the required key.

Sources

https://ru.wikipedia.org/wiki/Hash_table
https://www.youtube.com/watch?v=wg8hZxMRwcw

Source code

https://gitlab.com/demensdeum/datastructures

Working with Resources in Android C++

There are several options for working with resources in Android via the NDK (C++):

  1. Access resources inside the apk file using AssetManager
  2. Download resources from the Internet and unpack them into the application directory, then use them with standard C++ methods
  3. A combined method: access the archive with resources in the apk via AssetManager, unpack it into the application directory, then use the files with standard C++ methods

Next I will describe the combined access method used in the Flame Steel Engine.
When using SDL, access to resources in the apk is simpler: the library wraps AssetManager calls, offering interfaces similar to stdio (fopen, fread, fclose, etc.):

SDL_RWops *io = SDL_RWFromFile("files.fschest", "r");

After loading the archive from the apk into a buffer, you need to change the current working directory to the application directory, which is available to the application without additional permissions. To do this, we use an SDL wrapper:

chdir(SDL_AndroidGetInternalStoragePath());

Next, we write the archive from the buffer to the current working directory using fopen, fwrite, fclose. Once the archive is in a directory accessible to C++, we unpack it. Zip archives can be unpacked with a combination of two libraries, minizip and zlib: the first works with the archive structure, while the second decompresses the data.
For more control and ease of porting, I implemented my own zero-compression archive format called FSChest (Flame Steel Chest). The format supports archiving a directory of files and unpacking it; there is no support for folder hierarchy, only flat files.
We include the FSChest library header and unpack the archive:

#include "fschest.h" 
FSCHEST_extractChestToDirectory(archivePath, SDL_AndroidGetInternalStoragePath()); 

After unpacking, the C/C++ interfaces have access to the files from the archive. Thus I did not have to rewrite all of the engine's file handling; I only added archive unpacking at the startup stage.

Sources

https://developer.android.com/ndk/reference/group/asset

Source Code

https://gitlab.com/demensdeum/space-jaguar-action-rpg
https://gitlab.com/demensdeum/fschest

Stack Machine and RPN

Let’s say we need to implement a simple bytecode interpreter. What approach should we choose to implement this task?

The Stack data structure makes it possible to implement the simplest bytecode machine. The features and implementations of stack machines are described in many articles; I will only mention that the Java virtual machine is an example of a stack machine.

The principle of the machine is simple: a program containing data and operation codes (opcodes) is fed to the input, and the necessary operations are implemented using stack manipulations. Let’s look at an example of a bytecode program for my stack machine:

пMVkcatS olleHП
 

The output will be the string “Hello StackVM”. The stack machine reads the program from left to right, pushing the data onto the stack symbol by symbol; when it encounters an opcode symbol, it executes the corresponding command using the stack.

An example of a stack machine implementation in Node.js:
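The original listing is not preserved here, so the following is a minimal sketch assuming the semantics described in this section: ordinary symbols are pushed onto the stack, and the П opcode pops the whole stack into the output line (the lowercase п marker is omitted in this sketch; the * and + opcodes are included for the RPN examples in the next section):

```javascript
// Minimal character-oriented stack machine sketch.
function run(program) {
  const stack = [];
  let output = "";
  for (const ch of program) {
    switch (ch) {
      case "*": { // pop two values, multiply, push the result
        const a = Number(stack.pop());
        const b = Number(stack.pop());
        stack.push(a * b);
        break;
      }
      case "+": { // pop two values, add, push the result
        const a = Number(stack.pop());
        const b = Number(stack.pop());
        stack.push(a + b);
        break;
      }
      case "П": // print opcode: pop the entire stack into the output
        while (stack.length > 0) output += String(stack.pop());
        break;
      default: // anything else is data: push the symbol onto the stack
        stack.push(ch);
    }
  }
  return output;
}

console.log(run("MVkcatS olleHП")); // Hello StackVM
console.log(run("22*34*+П")); // 16
```

Because printing pops the stack, the data symbols come out in reverse order, which is why the string is stored backwards in the program.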

Reverse Polish Notation (RPN)

Stack machines are also easy to use for implementing calculators, using Reverse Polish Notation (postfix notation).
An example of ordinary infix notation:
2*2+3*4

Converts to RPN:
22*34*+

To calculate the postfix notation we use a stack machine:
2 – to top of stack (stack: 2)
2 – to top of stack (stack: 2,2)
* – get the top of the stack twice, multiply the result, push to the top of the stack (stack: 4)
3 – to top of stack (stack: 4, 3)
4 – to top of stack (stack: 4, 3, 4)
* – get the top of the stack twice, multiply the result, push to the top of the stack (stack: 4, 12)
+ – get the top of the stack twice, add the result, push to the top of the stack (stack: 16)

As you can see, the result of the operations, 16, remains on the stack; it can be printed by implementing stack-printing opcodes, for example:
п22*34*+П

П is the opcode to start printing the stack, and п is the opcode to finish printing the stack and send the final line for rendering.
To convert arithmetic expressions from infix to postfix notation, Edsger Dijkstra's algorithm known as the shunting yard is used. An example implementation can be found in the repository of the Node.js stack machine project below.
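A minimal sketch of the shunting-yard conversion, limited to single-digit operands and the left-associative operators + and * used in the example above (the toRPN name is illustrative):

```javascript
// Shunting yard, minimal form: operands go straight to the output,
// operators wait on a stack until a lower-precedence operator arrives.
function toRPN(infix) {
  const prec = { "+": 1, "*": 2 };
  const output = [];
  const operators = [];
  for (const ch of infix) {
    if (/\d/.test(ch)) {
      output.push(ch); // operand: straight to the output
    } else if (ch in prec) {
      // pop waiting operators of higher or equal precedence first
      while (operators.length > 0 &&
             prec[operators[operators.length - 1]] >= prec[ch]) {
        output.push(operators.pop());
      }
      operators.push(ch);
    }
  }
  while (operators.length > 0) output.push(operators.pop()); // flush
  return output.join("");
}

console.log(toRPN("2*2+3*4")); // 22*34*+
```

Feeding the result into the stack machine from the previous section evaluates the expression.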

Sources

https://tech.badoo.com/ru/article/579/interpretatory-bajt-kodov-svoimi-rukami/
https://ru.wikipedia.org/wiki/Обратная_польская_запись

Source code

https://gitlab.com/demensdeum/stackvm/

Skeletal Animation (Part 2 – Node Hierarchy, Interpolation)

I continue to describe the skeletal animation algorithm as it is implemented in the Flame Steel Engine.

Since this is the most complex algorithm I’ve ever implemented, there may be errors in the development notes. In the previous article about this algorithm, I made a mistake: the bone array is passed to the shader for each mesh separately, not for the entire model.

Hierarchy of nodes

For the algorithm to work correctly, the model must contain a connection between the bones (graph). Let’s imagine a situation in which two animations are played simultaneously – a jump and raising the right hand. The jump animation must raise the model along the Y axis, while the animation of raising the hand must take this into account and rise together with the model in the jump, otherwise the hand will remain in place on its own.

Let's describe the node connection for this case: the body contains the hand. When the algorithm runs, the bone graph is read, and all animations are taken into account with the correct connections. In the model's memory the graph is stored separately from all animations, serving only to reflect the connectivity of the model's bones.

Interpolation on CPU

In the previous article I described the principle of rendering skeletal animation – “transformation matrices are passed from the CPU to the shader at each rendering frame.”

Each rendering frame is processed on the CPU: for each bone of the mesh, the engine obtains the final transformation matrix by interpolating position, rotation, and scale. During interpolation of the final bone matrix, the node tree is traversed for all active node animations, the final matrix is multiplied with its parents' matrices, and then sent to the vertex shader for rendering.

Vectors are used for interpolating position and scale; quaternions are used for rotation, since they are very easy to interpolate (SLERP), unlike Euler angles, and are also very easy to convert into a transformation matrix.
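As a sketch of those two interpolation primitives, in JavaScript for brevity; the array representations ([x, y, z] vectors and [x, y, z, w] quaternions) are illustrative, not the engine's actual types:

```javascript
// Linear interpolation for position/scale vectors.
function lerp(a, b, t) {
  return a.map((v, i) => v + (b[i] - v) * t);
}

// Spherical linear interpolation (SLERP) for unit quaternions.
function slerp(q1, q2, t) {
  let dot = q1.reduce((sum, v, i) => sum + v * q2[i], 0);
  // Take the shorter arc: negate one quaternion if the dot is negative.
  if (dot < 0) { q2 = q2.map(v => -v); dot = -dot; }
  if (dot > 0.9995) return lerp(q1, q2, t); // nearly parallel: lerp is fine
  const theta = Math.acos(dot); // angle between the quaternions
  const s1 = Math.sin((1 - t) * theta) / Math.sin(theta);
  const s2 = Math.sin(t * theta) / Math.sin(theta);
  return q1.map((v, i) => s1 * v + s2 * q2[i]);
}

console.log(lerp([0, 0, 0], [2, 4, 6], 0.5)); // [ 1, 2, 3 ]
```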

How to Simplify Implementation

To simplify debugging of the vertex shader, I added a simulation of the vertex shader on the CPU behind the macro FSGLOGLNEWAGERENDERER_CPU_BASED_VERTEX_MODS_ENABLED. The video card manufacturer NVIDIA has a utility for debugging shader code called Nsight; perhaps it could also simplify the development of complex vertex/pixel shader algorithms, but I never had a chance to try it, and the CPU simulation was enough.

In the next article I plan to describe mixing several animations, fill in the remaining gaps.

Sources

https://www.youtube.com/watch?v=f3Cr8Yx3GGA

Adding JavaScript Scripting Support to C++

In this note I will describe a way to add support for JavaScript scripts to a C++ application using the Tiny-JS library.

Tiny-JS is a library for embedding in C++ that provides execution of JavaScript code, with support for bindings (the ability to call C++ code from scripts).

At first I wanted to use the popular libraries ChaiScript or Duktape, or to integrate Lua, but because of their dependencies and possible difficulties in porting to different platforms, I decided to find a simple, minimal, yet powerful MIT-licensed JS library; Tiny-JS meets these criteria. The only downside of this library is the lack of support/development by the author, but its code is simple enough that you can take over maintenance yourself if necessary.

Download Tiny-JS from the repository:
https://github.com/gfwilliams/tiny-js

Next, add Tiny-JS headers to the code that is responsible for scripts:

#include "tiny-js/TinyJS.h"
#include "tiny-js/TinyJS_Functions.h"

Add the Tiny-JS .cpp files to the build; after that you can start writing scripts to load and run.

An example of using the library is available in the repository:
https://github.com/gfwilliams/tiny-js/blob/master/Script.cpp
https://github.com/gfwilliams/tiny-js/blob/wiki/CodeExamples.md

An example of the implementation of the handler class can be found in the SpaceJaguar project:
https://gitlab.com/demensdeum/space-jaguar-action-rpg/-/blob/master/project/src/Controllers/SpaceJaguarScriptController/SpaceJaguarScriptController.h
https://gitlab.com/demensdeum/space-jaguar-action-rpg/-/blob/master/project/src/Controllers/SpaceJaguarScriptController/SpaceJaguarScriptController.cpp

Example of a game script added to the application:
https://gitlab.com/demensdeum/space-jaguar-action-rpg/-/blob/master/project/resources/com.demensdeum.spacejaguaractionrpg.scripts.sceneController.js

Sources

https://github.com/gfwilliams/tiny-js
https://github.com/dbohdan/embedded-scripting-languages
https://github.com/AlexKotik/embeddable-scripting-languages

Building C++ SDL iOS App on Linux

In this note I will describe the procedure for building a C++ SDL application for iOS on Linux, signing an ipa archive without a paid Apple Developer subscription and installing it on a clean device (iPad) using macOS without Jailbreak.

First, let's build the cross-compilation toolchain for Linux:
https://github.com/tpoechtrager/cctools-port

The toolchain needs to be downloaded from the repository, then follow the instructions on the Godot Engine website to complete the installation:
https://docs.godotengine.org/ru/latest/development/compiling/cross-compiling_for_ios_on_linux.html

At the moment, you need to download an Xcode dmg and copy the SDK from it in order to build cctools-port. This step is easier to do on macOS: simply copy the necessary SDK files from an installed Xcode. After a successful build, the terminal will print the path to the cross-compiler toolchain.

Next, you can start building the SDL application for iOS. Open the project's cmake configuration and add the changes necessary to build the C++ code:

SET(CMAKE_SYSTEM_NAME Darwin)
SET(CMAKE_C_COMPILER arm-apple-darwin11-clang)
SET(CMAKE_CXX_COMPILER arm-apple-darwin11-clang++)
SET(CMAKE_LINKER arm-apple-darwin11-ld)

Now you can build using cmake and make, but don’t forget to set $PATH to the cross-compiler toolchain:


PATH=$PATH:~/Sources/cctools-port/usage_examples/ios_toolchain/target/bin

For correct linking with the frameworks and SDL, we list them in cmake; as an example, the dependencies of the Space Jaguar game:


target_link_libraries(
${FSEGT_PROJECT_NAME}
${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/libclang_rt.ios.a
${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/libSDL2.a
${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/libSDL2_mixer.a
${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/libSDL2_image.a
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/CoreServices.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/ImageIO.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/Metal.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/AVFoundation.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/GameController.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/CoreMotion.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/CoreGraphics.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/AudioToolbox.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/CoreAudio.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/QuartzCore.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/OpenGLES.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/UIKit.framework"
"${FLAME_STEEL_PROJECT_ROOT_DIRECTORY}/scripts/buildScripts/ios/resources/libs/Foundation.framework"
)

In my case, the SDL, SDL_image, SDL_mixer libraries were compiled in advance in Xcode on macOS for static linking; the frameworks are copied from Xcode. Also added is the libclang_rt.ios.a library, which contains iOS-specific runtime calls such as isOSVersionAtLeast. A macro is enabled for working with OpenGL ES, disabling unsupported functions in the mobile version, similar to Android.

After solving all the build issues, you should get a built binary for arm. Next, let’s look at running the built binary on a device without Jailbreak.

On macOS, install Xcode and register on the Apple portal, without paying for the developer program. Add the account in Xcode -> Preferences -> Accounts, create an empty application, and build it on a real device. During the build, the device will be added to the free developer account. After building and running, you need to build an archive: select Generic iOS Device and Product -> Archive. After the archive is built, extract the embedded.mobileprovision and PkgInfo files from it. In the build log for the device, find the codesign line with the correct signing key and the path to the entitlements file with the app.xcent extension, and copy it.

Copy the .app folder from the archive and replace the binary inside it with the one compiled by the cross compiler on Linux (for example, SpaceJaguar.app/SpaceJaguar), then add the necessary resources to the .app, and check that the PkgInfo and embedded.mobileprovision files from the archive are intact in the .app, copying them again if necessary. Re-sign the .app using the codesign command; codesign requires the signing key and the path to the entitlements file (which can be renamed with a .plist extension).

After re-signing, create a Payload folder, move the folder with the .app extension into it, create a zip archive with Payload at the root, and rename the archive with the .ipa extension. After that, open the list of devices in Xcode and drag and drop the new ipa onto the list of device applications; installation via Apple Configurator 2 does not work for this method. If the re-signing is done correctly, the application with the new binary will be installed on the iOS device (for example, an iPad) with a 7-day certificate, which is enough for a testing period.

Sources

https://github.com/tpoechtrager/cctools-port
https://docs.godotengine.org/ru/latest/development/compiling/cross-compiling_for_ios_on_linux.html
https://jonnyzzz.com/blog/2018/06/13/link-error-3/
https://stackoverflow.com/questions/6896029/re-sign-ipa-iphone
https://developer.apple.com/library/archive/documentation/Security/Conceptual/CodeSigningGuide/Procedures/Procedures.html

Fixing Slow HDD Performance in Windows 10

This note is dedicated to all hard drive users who are not giving up their positions.


Original (Mae Mu)

After 1.5 years of using an HP Pavilion laptop in a dual-boot configuration, HDD (Windows 10) and SSD (Ubuntu), I began to notice very long application loading times, general unresponsiveness of the interface, and freezes on the simplest operations in Windows 10. The problem was minimized to the point where it became possible to use the laptop again. Below I describe the actions I took to eliminate the problem.

Diagnostics

To begin the study, we need to rule out any kind of mystification, so first let's determine the main causes of hard drive failures. What can go wrong when working with a hard drive? Problems can arise at the physical level of the electronics and at the logical, software level of the data.
Electronics problems include things like a non-working computer/laptop power supply, problems with the laptop battery, wear of hard drive components, problems with the circuits and chips of the drive's internals, bugs in the firmware, the consequences of impacts or falls of the drive, or similar problems with other devices that affect its operation.
Critical wear of a hard drive is the point at which the number of bad sectors (bad blocks) makes further operation of the drive impossible. Bad blocks are masked by the hard drive firmware, and the data is transferred to other sectors automatically; this should not affect the operation of the drive until a certain critical moment.
Software, logic-level problems include errors in the file system caused by incorrect application behavior or user actions: hot-unplugging the device, terminating write processes without correctly stopping applications, and errors in drivers or operating system services.
Without specialized electronic diagnostic tools, we can only check the correctness of the software level; electronic malfunctions may be detected along the way, and those are usually eliminated by block repair (replacement of components/chips). Next, we will consider software diagnostic methods using diagnostic utilities. It is worth noting that all utilities should be run on the system with maximum priority, since other applications can interfere with performance measurements and block the disk for reading/writing, which leads to incorrect diagnostic results.

SMART

S.M.A.R.T. is a system for monitoring the state of data storage devices: HDD, SSD, eMMC, etc. It allows you to assess the wear of the device, view the number of bad blocks, and take further action based on that data. You can view SMART data in various disk applications; I prefer the manufacturer's utilities. For my Seagate hard drive I used the SeaTools utility; it reported the state as GOOD, meaning the disk firmware thinks everything is fine.

Manufacturer Utilities

The disk manufacturer's utilities provide tests to check its operation. SeaTools has several types of tests; you can use all of them to localize the problem. Quick and simple tests may not reveal any problems, so prefer the long ones. In my case, only the Long Test found errors.

Slowride

To check read correctness and find slow or dead blocks, I wrote the application slowride. It works on a very simple principle: it opens a block device descriptor with the settings specified by the user, reads data from the entire device while measuring time, and reports slow blocks. The program stops at the first error; in that case you will have to move on to more serious data-recovery utilities, since the disk can no longer be read by simple means.
In my case, the entire disk was read correctly, with a small drop in speed, to 90 MB/s (5400 rpm), in some areas of the disk. From this I concluded that I was dealing with a software problem.

Acoustic Analysis

This method is not a software diagnostic method, but it is quite important for troubleshooting. For example, with a partially working power supply, the hard drive may hang or freeze, making a loud click.
In my case, when working with the disk in Windows 10, I heard the loud crackle, familiar to all HDD owners, of the disk head running back and forth while trying to do anything in the operating system. The sound was almost constant, which led me to think of excessive disk fragmentation and disk overload by background services.

Fixing

No electronics problems were detected during software diagnostics, block-by-block reading of the entire disk was completed correctly, however SeaTools showed errors during the Long Test check.

Manufacturer Utilities

In addition to diagnostics, the disk manufacturer's software provides error correction procedures. In SeaTools, the Fix All button is responsible for this; after you confirm your consent to potential data loss, the correction process starts. Did this fix help in my case? No: the disk continued to work just as loudly and slowly, but the Long Test no longer showed errors.

CHKDSK

CHKDSK is a Microsoft utility for eliminating software errors in Windows file systems. Over time, such errors accumulate on the disk and can greatly interfere with work, up to making it impossible to read or write any data at all. You can find instructions for the utility on the Microsoft website; I recommend using all possible flags to fix errors (at the time of writing, /r /b /f). You need to run the check with administrator rights from the Windows terminal (cmd); for the system partition it runs during system boot and can take a very long time, 12 hours in my case.
Did this fix help in my case? No.

Disk defragmentation

Data on the disk is handled in blocks, and large files are usually written in several blocks/fragments. Over time, many deleted files leave empty blocks that are not adjacent; when writing, files fill these voids, and the disk head has to physically travel large distances. This problem is called fragmentation, and only hard drive users encounter it. At the time of the fixes, fragmentation of my hard drive was 41%; visually it looked like this:

That is, everything is bad. You can view fragmentation and defragment using the Defraggler utility or the built-in defragmenter. You can also enable the “Optimize drives” service in Windows 10 and schedule defragmentation in the control panel. Only HDDs need defragmentation; it is not advisable to enable it for SSDs, since it leads to accelerated wear of the drive, which is apparently why background defragmentation is disabled for them by default.

An alternative defragmentation option is also known – moving data to another disk, formatting the disk, and copying the data back. In this case, the data will be written to completely empty sectors, while maintaining the correct logical structure for the system to operate. This option is fraught with problems of resetting potentially critical metadata, which may not be moved during normal copying.

Disabling services

With Mark Russinovich's Process Monitor utility, you can track the processes that keep the hard drive busy; it is enough to enable the I/O Write/Read columns. After examining these columns, I disabled the Xbox Game Bar service and the well-known background acceleration service Superfetch (under its new name, SysMain) through the services panel of the control panel. Superfetch is supposed to continuously analyze the applications the user runs and speed up their launch by caching them in RAM; in my case it led to constant background load on the entire disk and made work impossible.

Cleaning the disk

I also removed old applications and unnecessary files, freeing up sectors for correct defragmentation, simplifying the operating system's work, and reducing the number of useless, heavy services and programs.

Result

What helped the most? A noticeable difference in performance came after defragmenting the disk, and the spontaneous freezes were eliminated by disabling the Xbox and Superfetch services. Would these problems have occurred if I had been using an SSD? There would certainly have been no slowness caused by fragmentation; the service problems would have had to be eliminated in any case, and software errors do not depend on the drive type. In the near future I plan to switch completely to SSD, but for now, “Long live pancakes, pancakes forever!”

Links

http://www.outsidethebox.ms/why-windows-8-defragments-your-ssd-and-how-you-can-avoid-this/
https://channel9.msdn.com/Shows/The-Defrag-Show
https://www.seagate.com/ru/ru/support/downloads/seatools/
https://www.ccleaner.com/defraggler/download
https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/chkdsk
https://gitlab.com/demensdeum/slowride/