ollama-call

If you use Ollama and don’t want to write your own API wrapper every time,
the ollama_call project significantly simplifies the work.

This is a small Python library that allows you to send a request to a local LLM with one function
and immediately receive a response, including in JSON format.

Installation

pip install ollama-call

Why it's useful

  • minimal code for working with the model;
  • structured JSON response for further processing;
  • convenient for rapid prototypes and MVPs;
  • supports streaming output if necessary.

Usage example

from ollama_call import ollama_call

response = ollama_call(
    user_prompt="Hello, how are you?",
    format="json",
    model="gemma3:12b"
)

print(response)

When it is especially useful

  • you write scripts or services on top of Ollama;
  • you need a predictable response format;
  • you don't want to pull in heavy frameworks.

Summary

ollama_call is a lightweight and clear wrapper for working with Ollama from Python.
A good choice if simplicity and quick results are important.

GitHub
https://github.com/demensdeum/ollama_call

SFAP: a modular framework for modern data acquisition and processing

With automation and artificial intelligence developing rapidly, the task of effectively collecting,
cleaning and transforming data becomes critical. Most solutions cover only individual stages
of this process, requiring complex integration and maintenance.

SFAP (Seek · Filter · Adapt · Publish) is an open-source project in Python,
which offers a holistic and extensible approach to processing data at all stages of its lifecycle:
from searching for sources to publishing the finished result.

What is SFAP

SFAP is an asynchronous framework built around a clear concept of a data processing pipeline.
Each stage is logically separate and can be independently expanded or replaced.

The project is based on the Chain of Responsibility architectural pattern, which provides:

  • pipeline configuration flexibility;
  • simple testing of individual stages;
  • scalability for high loads;
  • clean separation of responsibilities between components.

Main stages of the pipeline

Seek – data search

At this stage, data sources are discovered: web pages, APIs, file storages
or other information flows. SFAP makes it easy to connect new sources without changing
the rest of the system.

Filter – filtering

Filtering is designed to remove noise: irrelevant content, duplicates, technical elements
and low quality data. This is critical for subsequent processing steps.

Adapt – adaptation and processing

The adaptation stage is responsible for data transformation: normalization, structuring,
semantic processing and integration with AI models (including generative ones).

Publish – publication

At the final stage, the data is published in the target format: databases, APIs, files, external services
or content platforms. SFAP does not limit how the result is delivered.
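
To make the pipeline idea concrete, here is a minimal sketch of what a Seek → Filter → Adapt → Publish chain could look like in Python with asyncio. The handler names and signatures below are illustrative only and may differ from the actual SFAP API:

import asyncio

class Handler:
    """One link in the Chain of Responsibility: processes items, then passes them on."""
    def __init__(self, successor=None):
        self.successor = successor

    async def handle(self, items):
        items = await self.process(items)
        if self.successor:
            return await self.successor.handle(items)
        return items

    async def process(self, items):
        return items  # default: pass items through unchanged

class Seek(Handler):
    async def process(self, items):
        # discover sources; here we simply emit a couple of fake records
        return [{"url": "https://example.com/a"}, {"url": "https://example.com/a"}]

class Filter(Handler):
    async def process(self, items):
        # remove noise; here we just drop duplicates by URL
        seen, result = set(), []
        for item in items:
            if item["url"] not in seen:
                seen.add(item["url"])
                result.append(item)
        return result

class Adapt(Handler):
    async def process(self, items):
        # normalize / enrich each record
        return [{**item, "normalized": True} for item in items]

class Publish(Handler):
    async def process(self, items):
        print("published:", items)
        return items

pipeline = Seek(Filter(Adapt(Publish())))
asyncio.run(pipeline.handle([]))

Each stage can be replaced or extended independently, which is exactly the flexibility the Chain of Responsibility pattern provides.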

Key features of the project

  • Asynchronous architecture based on asyncio
  • Modularity and extensibility
  • Support for complex processing pipelines
  • Ready for integration with AI/LLM solutions
  • Suitable for highly loaded systems

Practical use cases

  • Aggregation and analysis of news sources
  • Preparing datasets for machine learning
  • Automated content pipeline
  • Cleansing and normalizing large data streams
  • Integration of data from heterogeneous sources

Getting started with SFAP

All you need to get started is:

  1. Clone the project repository;
  2. Install Python dependencies;
  3. Define your own pipeline steps;
  4. Run the asynchronous processing pipeline.

The project is easily adapted to specific business tasks and can grow with the system,
without turning into a monolith.

Conclusion

SFAP is not just a parser or data collector, but a full-fledged framework for building
modern data-pipeline systems. It is suitable for developers and teams who care about
scalable, architecturally clean, data-driven systems.
The project source code is available on GitHub:
https://github.com/demensdeum/SFAP

Why can’t I fix the bug?

You spend hours working on the code, going through hypotheses, adjusting the conditions, but the bug still reproduces. Sound familiar? This state of frustration is often called “ghost hunting.” The program seems to live its own life, ignoring your fixes.

One of the most common – and most annoying – reasons for this situation is looking for an error in completely the wrong place in the application.

The trap of “false symptoms”

When we see an error, our attention is drawn to the place where it blew up. But in complex systems, the place where a bug manifests itself (a crash or an incorrect value) is only the end of a long chain of events. When you try to fix the ending, you are fighting the symptoms, not the disease.

This is where the flowchart concept comes in.

How it works in reality

Of course, you don't have to literally draw a flowchart on paper every time, but it is important to have it in your head or at hand as an architectural guide. A flowchart allows you to visualize the operation of an application as a tree of outcomes.

Without understanding this structure, the developer is often groping in the dark. Imagine the situation: you edit the logic in one condition branch, while the application (due to a certain set of parameters) goes to a completely different branch that you didn’t even think about.

Result: You spend hours on a “perfect” code fix in one part of the algorithm, which, of course, does nothing to fix the problem in another part of the algorithm where it actually fails.


Algorithm for defeating a bug

To stop beating on a closed door, you need to change your approach to diagnosis:

  • Find the state in the outcome tree: before writing code, you need to determine the exact path the application has taken. At what point did the logic take a wrong turn? What specific state led to the problem?
  • Reproduction is 80% of success: this is usually handled by testers and automated tests. If the bug is flaky, developers join the process to search for the reproduction conditions together.
  • Use as much information as possible: Logs, OS version, device parameters, connection type (Wi-Fi/5G) and even a specific telecom operator are important for localization.

“Photograph” of the moment of error

Ideally, to fix it, you need to get the full state of the application at the time the bug was reproduced. Interaction logs are also critically important: they show not only the final point, but also the entire user path (what actions preceded the failure). This helps to understand how to recreate a similar state again.

Future tip: If you encounter a complex case, add extended debug logging information to this section of code in case the situation happens again.
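
As a rough illustration in Python (the function and field names here are hypothetical), such extended logging can be as simple as dumping the relevant state right where the suspicious branch is taken:

import json
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("checkout")

def apply_discount(order, user, network_type):
    if user.get("is_premium") and order["total"] > 100:
        # "photograph" the state before acting on it, so the case can be replayed later
        logger.debug("discount branch hit: %s",
                     json.dumps({"order": order, "user": user, "network": network_type}))
        order["total"] *= 0.9
    return order

apply_discount({"total": 150}, {"is_premium": True}, "wifi")

The next time the bug reproduces, the log already contains the exact state and path that led to it.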


The problem of “elusive” states in the era of AI

In modern systems using LLM (Large Language Models), classical determinism (“one input, one output”) is often violated. You can pass exactly the same input data, but get a different result.

This happens due to the non-determinism of modern production systems:

  • GPU Parallelism: GPU floating point operations are not always associative. Due to parallel execution of threads, the order in which numbers are added may change slightly, which may affect the result.
  • GPU temperature and throttling: Execution speed and load distribution may depend on the physical state of the hardware. In huge models, these microscopic differences accumulate and can lead to the selection of a different token at the output.
  • Dynamic batching: In the cloud, your request is combined with others. Different batch sizes change the mathematics of calculations in the kernels.

Under such conditions, it becomes almost impossible to reproduce “that same state”. Only a statistical approach to testing can save you here.
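
A minimal sketch of such a statistical check in Python (run_scenario here is a hypothetical stand-in for “call the model and validate its answer”): instead of asserting one exact output, run the scenario many times and require a minimum success rate.

import random

def run_scenario() -> bool:
    # placeholder: call the LLM-backed feature and validate the result
    return random.random() < 0.9  # imagine ~90% of runs produce a valid answer

def passes_statistically(runs: int = 50, min_success_rate: float = 0.7) -> bool:
    successes = sum(run_scenario() for _ in range(runs))
    rate = successes / runs
    print(f"success rate: {rate:.2f}")
    return rate >= min_success_rate

assert passes_statistically()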


When logic fails: Memory problems

If you are working with “unsafe” languages (C or C++), the bug may occur due to memory corruption.

These are the most severe cases: an error in one module can “overwrite” data in another. This leads to completely inexplicable and isolated failures that cannot be traced using normal application logic.

How to protect yourself at the architectural level?

To avoid such “mystical” bugs, you should use modern approaches:

  • Multithreaded programming patterns: clear synchronization eliminates race conditions.
  • Thread-safe languages: Tools that guarantee memory safety at compile time:
    • Rust: Ownership system eliminates memory errors.
    • Swift 6 Concurrency: strong data isolation checks.
    • Erlang: Complete process isolation through the actor model.

Summary

Fixing a bug is not about writing new code, but about understanding how the old one works. Remember: you could be wasting time editing a branch that the control flow never even reaches. Record the state of the system, take the non-determinism of AI into account and choose safe tools.

Flowcharts in practice, without the formalities

A flowchart is a visual tool that helps turn a complex algorithm into an understandable, structured sequence of actions. From programming to business process management, flowcharts serve as a universal language for visualizing, analyzing and optimizing even the most complex systems.

Imagine a map where the roads are logic and the cities are actions. That is a flowchart: an indispensable tool for navigating the most confusing processes.

Example 1: A simplified game launch flowchart
To understand the principle, let's sketch a simple game launch flow.

This diagram shows the ideal scenario, where everything happens without failures. But in real life, everything is much more complicated.

Example 2: An expanded game launch flowchart with data loading
Modern games often require an Internet connection to download user data, saves or settings. Let's add these steps to our diagram.

This diagram is already more realistic, but what happens if something goes wrong?

How it was: a game that “broke” when the Internet dropped

At the start of the project, the developers did not account for all possible scenarios. For example, they focused on the main game logic and never considered what would happen if the player loses the Internet connection.

In such a situation, the flowchart of their code would look like this:

In this case, instead of reporting an error or shutting down gracefully, the game froze while waiting for data that it never received due to the missing connection. This led to a “black screen” and the application hanging.

How it became: fixing it after user complaints

After numerous user complaints about freezes, the development team realized the error had to be fixed. They changed the code by adding an error-handling block that allows the application to respond correctly to the missing connection.

This is what the corrected flowchart looks like, with both scenarios taken into account:

Thanks to this approach, the game now correctly informs the user about the problem, and in some cases can even switch to offline mode, letting the player continue. This is a good example of why flowcharts are so important: they force the developer to think not only about the happy path, but about all possible failures, making the final product much more stable and reliable.
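
As a rough illustration in Python (the function names are hypothetical), the corrected flow boils down to wrapping the network load in error handling with an offline fallback, instead of waiting forever for data that will never arrive:

def load_player_data():
    # simulate the failure branch of the flowchart
    raise ConnectionError("no internet connection")

def start_game():
    try:
        data = load_player_data()
        print("starting online session with", data)
    except ConnectionError:
        # the added error-handling block: inform the user and fall back
        print("No connection: switching to offline mode")

start_game()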

Undefined behavior

Hangs and errors are just some examples of unpredictable program behavior. In programming there is the concept of undefined behavior: a situation where the language standard does not describe how the program should behave in a particular case.

This can lead to anything: from random “garbage” in the output to a program crash or even a serious security vulnerability. Undefined behavior often occurs when working with memory, for example with strings in the C language.

An example in C:

Imagine that a developer copied a string into a buffer but forgot to append the null terminator (`\0`), which marks the end of the string.

This is what the code looks like:

#include <stdio.h>
#include <string.h>

int main() {
    char buffer[5];
    char* my_string = "hello";

    // copies only the 5 characters of "hello", without the trailing '\0'
    memcpy(buffer, my_string, 5);

    printf("%s\n", buffer);
    return 0;
}

Expected result: “hello”
The real result is unpredictable.

Why does this happen? The `printf` function with the `%s` specifier expects the string to end with a null terminator. If it is missing, printf keeps reading memory beyond the allocated buffer.

Here is a flowchart of this process with its two possible outcomes:

This is a clear example of why flowcharts are so important: they force the developer to think not only about the happy path, but about all possible failures, including such low-level problems, making the final product much more stable and reliable.

Pixel Perfect: myth or reality in the era of declarative UI?

In the world of interface development there is a well-known concept: “pixel-perfect layout”. It implies reproducing the design mockup as precisely as possible, down to the last pixel. For a long time this was the gold standard, especially in the era of classic web design. However, with the arrival of declarative layout and the rapid growth in the variety of devices, the “pixel perfect” principle is becoming ever more ephemeral. Let's try to figure out why.

Imperative WYSIWYG vs. declarative code: what is the difference?

Traditionally, many interfaces, especially desktop ones, were created using imperative approaches or WYSIWYG (What You See Is What You Get) editors. In such tools, the designer or developer manipulates elements directly, placing them on a canvas with pixel accuracy. It is similar to working with a graphics editor: you see how your element looks, and you can position it precisely. In this setting, achieving “pixel perfect” was a very real goal.

However, modern development increasingly relies on declarative layout. This means that you do not tell the computer to “put this button here”, but describe what you want to get. For example, instead of specifying the element's exact coordinates, you describe its properties: “This button should be red, have 16px padding on all sides and be centered in its container.” Frameworks like React, Vue, SwiftUI or Jetpack Compose are built on exactly this principle.

Why “pixel perfect” does not work with declarative layout across many devices

Imagine that you are building an application that should look equally good on an iPhone 15 Pro Max, a Samsung Galaxy Fold, an iPad Pro and a 4K monitor. Each of these devices has a different screen resolution, pixel density, aspect ratio and physical size.

When you use the declarative approach, the system itself decides how to display the interface you described on a particular device, taking all of its parameters into account. You set rules and dependencies, not hard-coded coordinates.

* Adaptability and responsiveness: The main goal of declarative layout is to create adaptive and responsive interfaces. Your interface should automatically adapt to the screen size and orientation without breaking and while staying readable. If we aimed for “pixel perfect” on every device, we would have to create countless variants of the same interface, which would completely negate the advantages of the declarative approach.
* Pixel density (DPI/PPI): Devices have different pixel densities. The same element sized at 100 “virtual” pixels will look much smaller on a high-density device than on a low-density one if scaling is not taken into account (see the small calculation after this list). Declarative frameworks abstract away physical pixels and work with logical units.
* Dynamic content: Content in modern applications is often dynamic; its volume and structure may vary. If we tied the layout rigidly to pixels, any change in text or images would cause the layout to fall apart.
* Different platforms: Besides the variety of devices, there are different operating systems (iOS, Android, Web, Desktop). Each platform has its own design language, standard controls and fonts. An attempt to make an absolutely identical, pixel-perfect interface on all platforms would lead to an unnatural look and a poor user experience.
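
A small back-of-the-envelope calculation in Python (the device parameters are illustrative, not exact specs) shows why the same 100 “virtual” pixels cannot be treated as physical ones:

logical_px = 100
devices = {
    "phone (3x scale, ~460 ppi)": {"scale": 3, "ppi": 460},
    "laptop (1x scale, ~110 ppi)": {"scale": 1, "ppi": 110},
}
for name, d in devices.items():
    unscaled_inches = logical_px / d["ppi"]                 # treating them as raw physical pixels
    scaled_inches = logical_px * d["scale"] / d["ppi"]      # what a framework with logical units renders
    print(f"{name}: unscaled ≈ {unscaled_inches:.2f} in, scaled ≈ {scaled_inches:.2f} in")

Without scaling, the element would shrink to roughly a fifth of an inch on the phone; with logical units its physical size stays comparable across devices.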

The old approaches did not go away, they evolved

It is important to understand that the approach to interfaces is not a binary choice between “imperative” and “declarative”. Historically, each platform had its own tools and approaches to building interfaces.

* Native interface files: for iOS these were XIB/Storyboards, for Android, XML layout files. These files are a pixel-perfect WYSIWYG layout that is then rendered in the application just as it appears in the editor. This approach has not disappeared; it continues to evolve and integrates with modern declarative frameworks. For example, SwiftUI at Apple and Jetpack Compose on Android took the purely declarative route, while still retaining the ability to use classic layouts.
* Hybrid solutions: Real projects often combine approaches. For example, the basic structure of the application can be implemented declaratively, while specific screens that require precise positioning of elements can use lower-level, imperative methods or native components built with the platform's specifics in mind.

From monolith to adaptability: how the evolution of devices shaped declarative layout

The world of digital interfaces has undergone tremendous changes over the past decades. From desktop computers with fixed resolutions we have arrived at an era of exponential growth in the variety of user devices. Today our applications have to work equally well on:

* smartphones of all form factors and screen sizes.
* tablets with their unique orientation modes and split-screen mode.
* laptops and desktops with monitors of various resolutions.
* TVs and media centers controlled remotely. Notably, even for TVs, whose remotes can be as simple as the Apple TV Remote with a minimum of buttons or, on the contrary, overloaded with functions, the modern expectation is that the code should not require specific adaptation for these input peculiarities. The interface should work “as if by itself”, without an extra description of how to interact with a specific remote.
* smart watches and wearable devices with minimalist screens.
* virtual reality (VR) headsets, which require a completely new approach to spatial interfaces.
* augmented reality (AR) devices, which overlay information onto the real world.
* automotive infotainment systems.
* and even household appliances: from refrigerators with touch screens and washing machines with interactive displays to smart ovens and smart home systems.

Each of these devices has its own unique characteristics: physical dimensions, aspect ratio, pixel density, input methods (touch screen, mouse, controllers, gestures, voice commands) and, importantly, the subtleties of the usage environment. For example, a VR headset requires deep immersion, a smartphone calls for fast and intuitive use on the go, while a refrigerator's interface should be as simple and large as possible for quick navigation.

The classic approach: the burden of supporting separate interfaces

In the era when desktops and the first mobile devices dominated, it was business as usual to create and maintain separate interface files, or even a completely separate interface codebase, for each platform.

* Development for iOS often required Storyboards or XIB files in Xcode and code written in Objective-C or Swift.
* For Android, XML layout files and code in Java or Kotlin were created.
* Web interfaces consisted of HTML/CSS/JavaScript.
* C++ applications on various desktop platforms used their own specific frameworks and tools:
* On Windows these were MFC (Microsoft Foundation Classes) and the Win32 API, with elements drawn manually or described in resource files for dialog windows and controls.
* On macOS, Cocoa (Objective-C/Swift) or the older Carbon API were used for direct control of the graphical interface.
* On Linux/Unix-like systems, libraries like GTK+ or Qt were common; they provided their own sets of widgets and mechanisms for building interfaces, often via XML-like markup files (for example, .ui files in Qt Designer) or by creating elements directly in code.

This approach ensured maximum control over each platform, allowing all of its specific features and native elements to be taken into account. However, it had a huge drawback: duplicated effort and enormous maintenance costs. The slightest change in design or functionality required making edits to several essentially independent code bases. This turned into a real nightmare for development teams, slowing down the delivery of new features and increasing the likelihood of errors.

Declarative layout: a single language for diversity

It was in response to this rapid growth in complexity that declarative layout emerged as the dominant paradigm. Frameworks like React, Vue, SwiftUI, Jetpack Compose and others are not just a new way of writing code, but a fundamental shift in thinking.

The main idea of the declarative approach: instead of telling the system “how” to draw every element (imperative), we describe “what” we want to see (declarative). We define the properties and state of the interface, and the framework decides how best to display it on a particular device.

This became possible thanks to the following key advantages:

1. Abstraction from platform details: declarative frameworks are specifically designed to let you forget about the low-level details of each platform. The developer describes components and their relationships at a higher level of abstraction, using a single, portable codebase.
2. Automatic adaptation and responsiveness: frameworks take responsibility for automatic scaling, relayout and adaptation of elements to different screen sizes, pixel densities and input methods. This is achieved through flexible layout systems such as Flexbox or Grid and concepts like “logical pixels” or “dp”.
3. Consistency of user experience: despite the external differences, the declarative approach lets you keep a single logic of behavior and interaction across the whole family of devices. This simplifies testing and provides a more predictable user experience.
4. Faster development and lower cost: with the same code able to run on many platforms, the time and cost of development and support are significantly reduced. Teams can focus on functionality and design rather than rewriting the same interface again and again.
5. Readiness for the future: the ability to abstract from the specifics of today's devices makes declarative code much more resilient to the appearance of new device types and form factors. Frameworks can be updated to support new technologies, and your already written code will receive that support relatively seamlessly.

Conclusion

Declarative layout is not just a fashion trend, but a necessary evolutionary step driven by the rapid development of user devices, including the Internet of Things (IoT) and smart household appliances. It allows developers and designers to create complex, adaptive and consistent interfaces without drowning in endless platform-specific implementations. The transition from imperative control over every pixel to a declarative description of the desired state is a recognition that the interfaces of the future must be flexible, portable and intuitive, regardless of which screen they are displayed on.

Programmers, designers and users need to learn to live in this new world. Excessive pixel-perfect detail tailored to a particular device or resolution leads to unnecessary time spent on development and support. Moreover, such rigid layouts may simply not work on devices with non-standard interfaces, such as TVs with limited input, VR and AR headsets, and other devices of the future that we don't even know about today. Flexibility and adaptability are the keys to building successful interfaces in the modern world.

Why programmers still get nothing done, even with neural networks

Today neural networks are used everywhere. Programmers use them to generate code, explain other people's solutions, automate routine tasks, and even create entire applications from scratch. It would seem this should increase efficiency, reduce errors and speed up development. But reality is much more prosaic: many still don't succeed. Neural networks do not solve the key problems; they only expose the depth of one's ignorance.

Full dependence on LLMs instead of understanding

The main reason is that many developers rely entirely on LLMs, ignoring the need for a deep understanding of the tools they work with. Instead of studying the documentation, a chat request. Instead of analyzing the cause of an error, copying a ready-made solution. Instead of architectural decisions, generating components from a description. All of this can work at a superficial level, but as soon as a non-standard task arises, integration with a real project is needed, or fine-tuning is required, everything falls apart.

Lack of context and outdated practices

Neural networks generate generalized code. They do not take into account the specifics of your platform, library versions, environment constraints or the project's architectural decisions. What is generated often looks plausible but has little to do with real, maintainable code. Even simple recommendations may not work if they refer to an outdated version of a framework or use approaches that have long been recognized as inefficient or unsafe. Models do not understand context; they rely on statistics. This means that errors and anti-patterns that are popular in open-source code will be reproduced again and again.

Redundancy, inefficiency and the absence of profiling

AI-generated code is often redundant. It pulls in unnecessary dependencies, duplicates logic and adds abstractions for no reason. The result is an inefficient, heavy structure that is hard to maintain. This is especially acute in mobile development, where bundle size, response time and energy consumption are critical.

A neural network does not do profiling, does not take CPU and GPU constraints into account and does not care about memory leaks. It does not analyze how efficient the code is in practice. Optimization remains manual work that requires analysis and expertise. Without it, the application becomes slow, unstable and resource-hungry, even if the code looks “right” from a structural point of view.

Vulnerabilities and security threats

Do not forget about security. There are already known cases where projects partially or fully created with LLMs were successfully hacked. The reasons are typical: use of unsafe functions, missing input validation, errors in authorization logic, leaks through external dependencies. A neural network can generate vulnerable code simply because similar code was found in open repositories. Without the involvement of security specialists and a proper review, such errors easily become entry points for attacks.

The Pareto principle and the nature of the flaws

The Pareto principle clearly applies to neural networks: 80% of the result is achieved with 20% of the effort. The model can generate a large amount of code, create the skeleton of a project, lay out the structure, set up types and connect modules. However, all of this may be outdated, incompatible with current versions of libraries or frameworks, and require significant manual rework. Automation here works more like a draft that needs to be checked, reworked and adapted to the specific realities of the project.

Cautious optimism

Nevertheless, the future looks encouraging. Constantly updated training datasets, integration with current documentation, automated architecture checks, compliance with design and security patterns: all of this can radically change the rules of the game. Perhaps in a few years we really will be able to write code faster, more safely and more efficiently, relying on an LLM as a genuine technical co-author. But for now, alas, a lot still has to be checked, rewritten and reworked by hand.

Neural networks are a powerful tool. But for them to work for you rather than against you, you need a solid foundation, critical thinking and the readiness to take over control at any moment.

Vibe-coding tricks: why LLMs still don't get along with SOLID, DRY and Clean

With the development of large language models (LLMs) such as ChatGPT, more and more developers use them to generate code, design architecture and speed up integration. However, in practical use it becomes noticeable that the classical principles of architecture, SOLID, DRY and Clean, get along poorly with the way LLMs generate code.

This does not mean the principles are outdated; on the contrary, they work perfectly in manual development. But with LLMs the approach has to be adapted.

Why LLMs struggle with architectural principles

Encapsulation

Encapsulation requires an understanding of the boundaries between parts of the system, knowledge of the developer's intentions, and adherence to strict access restrictions. LLMs often simplify the structure, make fields public for no reason or duplicate implementations. This makes the code more vulnerable to errors and violates architectural boundaries.

Abstractions and interfaces

Design patterns such as Abstract Factory or Strategy require a holistic view of the system and an understanding of its dynamics. Models may create an interface with no clear purpose, fail to provide an implementation for it, or break the links between layers. The result is a redundant or non-functional architecture.

DRY (Don't Repeat Yourself)

LLMs do not try to minimize repeated code; on the contrary, it is easier for them to duplicate blocks than to extract shared logic. Although they can offer refactoring on request, by default models tend to generate “self-sufficient” fragments, even if that leads to redundancy.

Clean Architecture

Clean architecture implies a strict hierarchy, independence from frameworks, a fixed direction of dependencies and minimal coupling between layers. Generating such a structure requires a global understanding of the system, while LLMs work at the level of word probabilities, not architectural integrity. As a result, the generated code mixes concerns, violates the direction of dependencies and oversimplifies the division into layers.

What works better with LLMs

WET instead of DRY
The WET (Write Everything Twice) approach turns out to be more practical when working with LLMs. Duplicated code does not require the model to retain context, which makes the result more predictable and easier to correct. It also reduces the likelihood of non-obvious connections and bugs.

In addition, duplication helps compensate for the model's short memory: if a certain piece of logic appears in several places, the LLM is more likely to take it into account during further generation. This simplifies maintenance and increases resistance to “forgetting”.

Simple structures instead of encapsulation

By avoiding complex encapsulation and passing data directly between parts of the code, you can greatly simplify both generation and debugging. This is especially relevant for rapid iterative development or building an MVP.

Simplified architecture

A simple, flat project structure with a minimum of dependencies and abstractions gives a more stable result during generation. The model handles such code more easily and is less likely to break the expected connections between components.

SDK integration: manual is more reliable

Most language models are trained on outdated versions of documentation. Therefore, when generating SDK installation instructions, errors often appear: outdated commands, irrelevant parameters or links to unavailable resources. Practice shows that it is best to rely on official documentation and manual setup, leaving the LLM an auxiliary role, for example generating boilerplate code or adapting configurations.

Why the principles still work – in manual development

It is important to understand that the difficulties with SOLID, DRY and Clean concern code generation through LLMs. When a developer writes the code by hand, these principles continue to prove their value: they reduce coupling, simplify maintenance, and increase the readability and flexibility of the project.

This is because human thinking tends toward generalization. We look for patterns, extract repeating logic into separate entities, create templates. This behavior probably has evolutionary roots: reducing the amount of information saves cognitive resources.

LLMs act differently: they are not burdened by the volume of data and do not strive for economy. On the contrary, it is easier for them to work with duplicated, fragmented information than to build and maintain complex abstractions. That is why they cope better with code without encapsulation, with repeated structures and minimal architectural rigor.

Conclusion

Large language models are a useful tool in development, especially in the early stages or when writing auxiliary code. But it is important to adapt your approach to them: simplify the architecture, limit abstractions, avoid complex dependencies and not rely on them when setting up SDKs.

The principles of SOLID, DRY and Clean are still relevant, but they give the best results in human hands. When working with an LLM, it is reasonable to use a simplified, pragmatic style that yields reliable, understandable code that is easy to finish by hand. And where the LLM forgets, duplicated code helps it remember.

Porting Surreal Engine C++ to WebAssembly

In this post I will describe how I ported the Surreal Engine game engine to WebAssembly.

Surreal Engine is a game engine that implements most of the functionality of the Unreal Engine 1 engine, famous games on this engine are Unreal Tournament 99, Unreal, Deus Ex, Undying. It belongs to the classic engines that worked mainly in a single-threaded execution environment.

My initial idea was to take on a project that I couldn’t complete in any reasonable amount of time, thus showing my Twitch followers that there are projects that even I can’t do. On my very first stream, I suddenly realized that the task of porting Surreal Engine C++ to WebAssembly using Emscripten was doable.

Surreal Engine Emscripten Demo

A month later I can show my fork and assembly of the engine on WebAssembly:
https://demensdeum.com/demos/SurrealEngine/

The controls are the same as in the original, using the keyboard arrows. Next, I plan to adapt it to mobile controls (touch), add correct lighting and other graphic features of the Unreal Tournament 99 render.

Where to start?

The first thing I want to say is that any C++ project can be ported to WebAssembly using Emscripten; the only question is how complete the functionality will be. Choose a project whose library dependencies already have Emscripten ports. In the case of Surreal Engine we are very lucky, because the engine uses the SDL 2 and OpenAL libraries, and both are ported to Emscripten. However, Vulkan is used as the graphics API, and it is currently not available for HTML5. Work is underway on WebGPU, but it is still at the draft stage, and it is also unclear how simple a further port from Vulkan to WebGPU will be after its full standardization. Therefore, I had to write my own basic OpenGL ES / WebGL renderer for Surreal Engine.

Building the project

The build system in Surreal Engine is CMake, which also simplifies porting, since Emscripten provides its own wrappers for the native build tools – emcmake and emmake.
The Surreal Engine port was based on the code of my last WebGL/OpenGL ES and C++ game, Death-Mask. Because of this, development went much more smoothly: I already had all the necessary build flags and code examples at hand.

One of the most important points in CMakeLists.txt is the build flags for Emscripten, below is an example from the project file:


-s MAX_WEBGL_VERSION=2 \
-s EXCEPTION_DEBUG \
-fexceptions \
--preload-file UnrealTournament/ \
--preload-file SurrealEngine.pk3 \
--bind \
--use-preload-plugins \
-Wall \
-Wextra \
-Werror=return-type \
-s USE_SDL=2 \
-s ASSERTIONS=1 \
-w \
-g4 \
-s DISABLE_EXCEPTION_CATCHING=0 \
-O3 \
--no-heap-copy \
-s ALLOW_MEMORY_GROWTH=1 \
-s EXIT_RUNTIME=1")

The build script itself:


emmake make -j 16
cp SurrealEngine.data /srv/http/SurrealEngine/SurrealEngine.data
cp SurrealEngine.js /srv/http/SurrealEngine/SurrealEngine.js
cp SurrealEngine.wasm /srv/http/SurrealEngine/SurrealEngine.wasm
cp ../buildScripts/Emscripten/index.html /srv/http/SurrealEngine/index.html
cp ../buildScripts/Emscripten/background.png /srv/http/SurrealEngine/background.png

Next, we’ll prepare index.html, which includes the project file system preloader. For posting on the web, I used Unreal Tournament Demo version 338. As you can see from the CMake file, the unpacked game folder was added to the build directory and linked as a preload-file for Emscripten.

Major code changes

Then it was necessary to change the game loop: you can't run an infinite loop in the browser, it freezes the page; instead you need to use emscripten_set_main_loop. I wrote about this peculiarity in my 2017 note “Porting SDL C++ game to HTML5 (Emscripten)”.
We change the exit condition of the while loop into an if, then expose the main engine class, which contains the game loop, in the global scope and write a global function that calls one step of the game loop on that global object:


#if __EMSCRIPTEN__
#include <emscripten.h>

Engine *EMSCRIPTEN_GLOBAL_GAME_ENGINE = nullptr;

void emscripten_game_loop_step() {
	EMSCRIPTEN_GLOBAL_GAME_ENGINE->Run();
}

#endif

After this, you need to make sure that there are no background threads in the application. If there are, get ready to rewrite them for single-threaded execution, or use the pthread support in Emscripten.
The background thread in Surreal Engine is used to play music: the main engine thread passes it data about the current track and whether music should play at all, then the background thread picks up the new state via a mutex and starts playing the new track or pauses. The background thread is also used to buffer music during playback.
My attempts to build Surreal Engine under Emscripten with pthread were unsuccessful, because the SDL2 and OpenAL ports were built without pthread support, and I did not want to rebuild them just for the sake of music. Therefore, I moved the background music functionality to single-threaded execution using a loop. Having removed the pthread calls from the C++ code, I moved buffering and music playback to the main thread, and to avoid stutters I increased the buffer by several seconds.

Next I will describe specific implementations of graphics and sound.

Vulkan is not supported!

Yes, Vulkan is not supported in HTML5, even though all the advertising brochures list cross-platform support on a wide range of platforms as Vulkan's main advantage. For this reason I had to write my own basic renderer for the simplified flavour of OpenGL – OpenGL ES. It is used on mobile devices, sometimes lacks the fashionable features of modern OpenGL, but it maps very well to WebGL, which is what Emscripten implements. Writing a basic tile renderer for the simplest GUI display, BSP rendering, and drawing of models and maps took about two weeks. This was probably the most difficult part of the project. There is still a lot of work ahead to implement the full functionality of the Surreal Engine renderer, so any help from readers in the form of code and pull requests is welcome.

OpenAL is supported!

It was a great stroke of luck that Surreal Engine uses OpenAL for audio output. After writing a simple hello world in OpenAL and building it in WebAssembly using Emscripten, it became clear to me how simple it all is, and I set out to port the audio.
After several hours of debugging it became obvious that there are several bugs in Emscripten's OpenAL implementation. For example, when reading the number of mono channels during initialization, the method returned an infinite number, and after trying to allocate a vector of infinite size, the C++ code crashed with a vector::length_error exception.
This was circumvented by hardcoding the number of mono channels to 2048:


		alcGetIntegerv(alDevice, ALC_STEREO_SOURCES, 1, &stereoSources);

#if __EMSCRIPTEN__
		monoSources = 2048; // for some reason Emscripten's OpenAL gives infinite monoSources count, bug?
#endif

Is there a network?

Surreal Engine currently does not support network play; playing with bots is supported, but someone needs to write the AI for those bots. In theory, network play could be implemented on WebAssembly/Emscripten using WebSockets.

Conclusion

In conclusion, I would like to say that porting Surreal Engine turned out to be quite smooth due to the use of libraries for which there are Emscripten ports, as well as my previous experience implementing a game in C++ for WebAssembly on Emscripten. Below are links to sources of knowledge, repositories on the topic.
M-M-M-MONSTER KILL!

Also, if you want to help the project, preferably with WebGL/OpenGL ES render code, then write to me in Telegram:
https://t.me/demenscave

Links

https://demensdeum.com/demos/SurrealEngine/
https://github.com/demensdeum/SurrealEngine-Emscripten

https://github.com/dpjudas/SurrealEngine

Flash Forever – Interceptor 2021

Recently, it turned out that Adobe Flash works quite stably under Wine. During a 4-hour stream, I made the game Interceptor 2021, which is a sequel to the game Interceptor 2020, written for the ZX Spectrum.

For those who are not in the know – the Flash technology provided interactivity on the web from 2000 to around 2015. Its shutdown was prompted by an open letter from Steve Jobs, in which he wrote that Flash should be consigned to history because it lagged on the iPhone. Since then, JS has become even more sluggish than Flash, and Flash itself has been wrapped in JS, making it possible to run it on anything thanks to the Ruffle player.

You can play it here:
https://demensdeum.com/demos/Interceptor2021

Video:
https://www.youtube.com/watch?v=-3b5PkBvHQk

Source code:
https://github.com/demensdeum/Interceptor-2021

CRUD repository

In this note I will describe the basic principles of the well-known classic CRUD pattern and its implementation in Swift. Swift is an open, cross-platform language available for Windows, Linux, macOS, iOS and Android.

There are many solutions for separating data storage from application logic. One such solution is the CRUD approach, an acronym for C – Create, R – Read, U – Update, D – Delete.
Typically, this principle is implemented as an interface to the database in which elements are addressed by a unique identifier, such as an id. A method is created for each CRUD letter: Create(object, id), Read(id), Update(object, id), Delete(object, id).
If the object contains an id inside itself, the id argument can be omitted in some methods (Create, Update, Delete), since the entire object, including its id field, is passed in. For Read, however, an id is required, since we want to fetch the object from the database by its identifier.

All names are fictitious

Let’s imagine that the hypothetical AssistantAI application was created using the free EtherRelm database SDK, the integration was simple, the API was very convenient, and the application was eventually released to the markets.
Suddenly, the SDK developer EtherRelm decides to make it paid, setting the price at $100 per year per user of the application.
What? Yes! What should the developers from AssistantAI do now, since they already have 1 million active users! Pay 100 million dollars?
Instead, a decision is made to evaluate migrating the application to RootData, the platform's native database; according to the programmers, such a migration would take about six months, not counting the implementation of new features. After some thought, the team decides to pull the application from the stores and rewrite it on another free cross-platform framework with a built-in BueMS database: this solves the problem of the paid database and simplifies development on other platforms.
A year later, the application was rewritten in BueMS, but then the framework developer suddenly decided to make it paid too. It turns out the team fell into the same trap twice; whether they manage to get out the second time is a completely different story.

Abstraction to the rescue

These problems could have been avoided if the developers had used interface abstraction inside the application. Not long ago a fourth pillar, abstraction, was added to the three classic pillars of OOP: polymorphism, encapsulation and inheritance.
Data abstraction allows you to describe ideas and models in general terms, with a minimum of detail, while remaining precise enough to build concrete implementations that solve business problems.
How can we abstract the work with the database so that the application logic does not depend on it? Let’s use the CRUD approach!

A simplified UML CRUD diagram looks like this:

Example with a fictitious EtherRelm database:

Example with a real SQLite database:

As you have already noticed, when the database is switched, only the database adapter changes; the CRUD interface the application interacts with remains the same. CRUD here is a variant of the GoF Adapter pattern, since with its help we adapt the application's interfaces to any database and join otherwise incompatible interfaces.
Words are empty, show me the code
To implement abstractions in programming languages, interfaces/protocols/abstract classes are used. All of these are the same thing, but during interviews you may be asked to name the difference between them. Personally, I think that there is no particular point in this, since the only purpose of using them is to implement data abstraction, otherwise it is a test of the interviewee’s memory.
CRUD is often implemented within the Repository pattern, but a repository may or may not implement the CRUD interface, depending on the developer’s ingenuity.

Let’s look at a fairly typical Swift code for a Book structure repository that works directly with the UserDefaults database:


struct Book: Codable {
	let title: String
	let author: String
}

class BookRepository {
	func save(book: Book) {
    		let record = try! JSONEncoder().encode(book)
    		UserDefaults.standard.set(record, forKey: book.title)
	}
    
	func get(bookWithTitle title: String) -> Book? {
    		guard let data = UserDefaults.standard.data(forKey: title) else { return nil }
    		let book = try! JSONDecoder().decode(Book.self, from: data)
    		return book
	}
    
	func delete(book: Book) {
    		UserDefaults.standard.removeObject(forKey: book.title)
	}
}

let book = Book(title: "Fear and Loathing in COBOL", author: "Sir Edsger ZX Spectrum")
let repository = BookRepository()
repository.save(book: book)
print(repository.get(bookWithTitle: book.title)!)
repository.delete(book: book)
guard repository.get(bookWithTitle: book.title) == nil else {
	print("Error: can't delete Book from repository!")
	exit(1)
}

The code above seems simple, but let's count the violations of the DRY (Don't Repeat Yourself) principle and the points of tight coupling:
  • coupling to the UserDefaults database;
  • coupling to the JSON encoder and decoder – JSONEncoder, JSONDecoder;
  • coupling to the Book structure, while we need an abstract repository so that we don't have to create a repository class for every structure we store in the database (a DRY violation).

I come across this kind of CRUD repository code quite often. It can be used, but the tight coupling and code duplication mean that over time it becomes very hard to maintain. This is especially noticeable when trying to switch to another database, or when changing the internal database logic in every repository created in the application.
Instead of duplicating code and keeping the coupling high, let's write a protocol for a CRUD repository, abstracting the database interface away from the application's business logic, observing DRY and achieving loose coupling:

protocol CRUDRepository {
    typealias Item = Codable
    typealias ItemIdentifier = String
    
    func create<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier, item: T) async throws
    func read<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier) async throws -> T
    func update<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier, item: T) async throws
    func delete(id: CRUDRepository.ItemIdentifier) async throws
}

The CRUDRepository protocol describes interfaces and associated data types for further implementation of a specific CRUD repository.

Next, we will write a specific implementation for the UserDefaults database:

class UserDefaultsRepository: CRUDRepository {
    private typealias RecordIdentifier = String
    
    let tableName: String
    let dataTransformer: DataTransformer
    
    init(
   	 tableName: String = "",
   	 dataTransformer: DataTransformer = JSONDataTransformer()
    ) {
   	 self.tableName = tableName
   	 self.dataTransformer = dataTransformer
    }
    
    private func key(id: CRUDRepository.ItemIdentifier) -> RecordIdentifier {
   	 "database_\(tableName)_item_\(id)"
    }
   	 
    private func isExists(id: CRUDRepository.ItemIdentifier) async throws -> Bool {
   	 UserDefaults.standard.data(forKey: key(id: id)) != nil
    }
    
    func create<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier, item: T) async throws {
   	 let data = try await dataTransformer.encode(item)
   	 UserDefaults.standard.set(data, forKey: key(id: id))
   	 UserDefaults.standard.synchronize()
    }
    
    func read<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier) async throws -> T {
   	 guard let data = UserDefaults.standard.data(forKey: key(id: id)) else {
   		 throw CRUDRepositoryError.recordNotFound(id: id)
   	 }
   	 let item: T = try await dataTransformer.decode(data: data)
   	 return item
    }
    
    func update<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier, item: T) async throws {
   	 guard try await isExists(id: id) else {
   		 throw CRUDRepositoryError.recordNotFound(id: id)
   	 }
   	 let data = try await dataTransformer.encode(item)
   	 UserDefaults.standard.set(data, forKey: key(id: id))
   	 UserDefaults.standard.synchronize()
    }
    
    func delete(id: CRUDRepository.ItemIdentifier) async throws {
   	 guard try await isExists(id: id) else {
   		 throw CRUDRepositoryError.recordNotFound(id: id)
   	 }
   	 UserDefaults.standard.removeObject(forKey: key(id: id))
   	 UserDefaults.standard.synchronize()
    }
}

The code looks long, but it contains a complete concrete implementation of a CRUD repository with loose coupling, details below.
The typealiases are there to make the code self-documenting.
Loose coupling and tight coupling
Decoupling from a specific structure (struct) is achieved with the generic T, which in turn must conform to Codable. Codable allows structures to be converted by classes that implement the TopLevelEncoder and TopLevelDecoder protocols, such as JSONEncoder and JSONDecoder; when only basic types (Int, String, Float, etc.) are used, no additional conversion code needs to be written.

Decoupling from a specific encoder and decoder is done using abstraction in the DataTransformer protocol:

protocol DataTransformer {
	func encode<T: Encodable>(_ object: T) async throws -> Data
	func decode<T: Decodable>(data: Data) async throws -> T
}

With the data transformer in place, we have abstracted the encoder and decoder interfaces, keeping the coupling loose so that different data formats can be supported.

The following is the code for a specific DataTransformer, namely for JSON:

class JSONDataTransformer: DataTransformer {
	func encode<T>(_ object: T) async throws -> Data where T : Encodable {
    		let data = try JSONEncoder().encode(object)
    		return data
	}
    
	func decode<T>(data: Data) async throws -> T where T : Decodable {
    		let item: T = try JSONDecoder().decode(T.self, from: data)
    		return item
	}
}

Was it possible?

What has changed? Now it is enough to initialize a concrete repository to work with any structure that conforms to Codable, eliminating code duplication and keeping the application loosely coupled.

Below is an example of client code using a concrete repository: UserDefaults acts as the database, the data format is JSON, and the stored structure is Client. It also shows writing and reading an array:


print("One item access example")

do {
	let clientRecordIdentifier = "client"
	let clientOne = Client(name: "Chill Client")
	let repository = UserDefaultsRepository(
    	tableName: "Clients Database",
    	dataTransformer: JSONDataTransformer()
	)
	try await repository.create(id: clientRecordIdentifier, item: clientOne)
	var clientRecord: Client = try await repository.read(id: clientRecordIdentifier)
	print("Client Name: \(clientRecord.name)")
	clientRecord.name = "Busy Client"
	try await repository.update(id: clientRecordIdentifier, item: clientRecord)
	let updatedClient: Client = try await repository.read(id: clientRecordIdentifier)
	print("Updated Client Name: \(updatedClient.name)")
	try await repository.delete(id: clientRecordIdentifier)
	let removedClientRecord: Client = try await repository.read(id: clientRecordIdentifier)
	print(removedClientRecord)
}
catch {
	print(error.localizedDescription)
}

print("Array access example")

let clientArrayRecordIdentifier = "clientArray"
let clientOne = Client(name: "Chill Client")
let repository = UserDefaultsRepository(
	tableName: "Clients Database",
	dataTransformer: JSONDataTransformer()
)
let array = [clientOne]
try await repository.create(id: clientArrayRecordIdentifier, item: array)
let savedArray: [Client] = try await repository.read(id: clientArrayRecordIdentifier)
print(savedArray.first!)

The first example includes error handling: after deletion, reading the deleted item throws an error, since the record no longer exists.

Switching databases

Now I will show how to transfer the current code to another database. For example, I will take the code of the SQLite repository that ChatGPT generated:


class SQLiteRepository: CRUDRepository {
    private typealias RecordIdentifier = String
    
    let tableName: String
    let dataTransformer: DataTransformer
    private var db: OpaquePointer?

    init(
   	 tableName: String,
   	 dataTransformer: DataTransformer = JSONDataTransformer()
    ) {
   	 self.tableName = tableName
   	 self.dataTransformer = dataTransformer
   	 self.db = openDatabase()
   	 createTableIfNeeded()
    }
    
    private func openDatabase() -> OpaquePointer? {
   	 var db: OpaquePointer? = nil
   	 let fileURL = try! FileManager.default
   		 .url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false)
   		 .appendingPathComponent("\(tableName).sqlite")
   	 if sqlite3_open(fileURL.path, &db) != SQLITE_OK {
   		 print("error opening database")
   		 return nil
   	 }
   	 return db
    }
    
    private func createTableIfNeeded() {
   	 let createTableString = """
   	 CREATE TABLE IF NOT EXISTS \(tableName) (
   	 id TEXT PRIMARY KEY NOT NULL,
   	 data BLOB NOT NULL
   	 );
   	 """
   	 var createTableStatement: OpaquePointer? = nil
   	 if sqlite3_prepare_v2(db, createTableString, -1, &createTableStatement, nil) == SQLITE_OK {
   		 if sqlite3_step(createTableStatement) == SQLITE_DONE {
       		 print("\(tableName) table created.")
   		 } else {
       		 print("\(tableName) table could not be created.")
   		 }
   	 } else {
   		 print("CREATE TABLE statement could not be prepared.")
   	 }
   	 sqlite3_finalize(createTableStatement)
    }
    
    private func isExists(id: CRUDRepository.ItemIdentifier) async throws -> Bool {
   	 let queryStatementString = "SELECT data FROM \(tableName) WHERE id = ?;"
   	 var queryStatement: OpaquePointer? = nil
   	 if sqlite3_prepare_v2(db, queryStatementString, -1, &queryStatement, nil) == SQLITE_OK {
   		 sqlite3_bind_text(queryStatement, 1, id, -1, nil)
   		 if sqlite3_step(queryStatement) == SQLITE_ROW {
       		 sqlite3_finalize(queryStatement)
       		 return true
   		 } else {
       		 sqlite3_finalize(queryStatement)
       		 return false
   		 }
   	 } else {
   		 print("SELECT statement could not be prepared.")
   		 throw CRUDRepositoryError.databaseError
   	 }
    }
    
    func create<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier, item: T) async throws {
   	 let insertStatementString = "INSERT INTO \(tableName) (id, data) VALUES (?, ?);"
   	 var insertStatement: OpaquePointer? = nil
   	 if sqlite3_prepare_v2(db, insertStatementString, -1, &insertStatement, nil) == SQLITE_OK {
   		 let data = try await dataTransformer.encode(item)
   		 sqlite3_bind_text(insertStatement, 1, id, -1, nil)
   		 sqlite3_bind_blob(insertStatement, 2, (data as NSData).bytes, Int32(data.count), nil)
   		 if sqlite3_step(insertStatement) == SQLITE_DONE {
       		 print("Successfully inserted row.")
   		 } else {
       		 print("Could not insert row.")
       		 throw CRUDRepositoryError.databaseError
   		 }
   	 } else {
   		 print("INSERT statement could not be prepared.")
   		 throw CRUDRepositoryError.databaseError
   	 }
   	 sqlite3_finalize(insertStatement)
    }
    
    func read<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier) async throws -> T {
   	 let queryStatementString = "SELECT data FROM \(tableName) WHERE id = ?;"
   	 var queryStatement: OpaquePointer? = nil
   	 var item: T?
   	 if sqlite3_prepare_v2(db, queryStatementString, -1, &queryStatement, nil) == SQLITE_OK {
   		 sqlite3_bind_text(queryStatement, 1, id, -1, nil)
   		 if sqlite3_step(queryStatement) == SQLITE_ROW {
       		 let queryResultCol1 = sqlite3_column_blob(queryStatement, 0)
       		 let queryResultCol1Length = sqlite3_column_bytes(queryStatement, 0)
       		 let data = Data(bytes: queryResultCol1, count: Int(queryResultCol1Length))
       		 item = try await dataTransformer.decode(data: data)
   		 } else {
       		 throw CRUDRepositoryError.recordNotFound(id: id)
   		 }
   	 } else {
   		 print("SELECT statement could not be prepared")
   		 throw CRUDRepositoryError.databaseError
   	 }
   	 sqlite3_finalize(queryStatement)
   	 return item!
    }
    
    func update<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier, item: T) async throws {
   	 guard try await isExists(id: id) else {
   		 throw CRUDRepositoryError.recordNotFound(id: id)
   	 }
   	 let updateStatementString = "UPDATE \(tableName) SET data = ? WHERE id = ?;"
   	 var updateStatement: OpaquePointer? = nil
   	 if sqlite3_prepare_v2(db, updateStatementString, -1, &updateStatement, nil) == SQLITE_OK {
   		 let data = try await dataTransformer.encode(item)
   		 sqlite3_bind_blob(updateStatement, 1, (data as NSData).bytes, Int32(data.count), nil)
   		 sqlite3_bind_text(updateStatement, 2, id, -1, nil)
   		 if sqlite3_step(updateStatement) == SQLITE_DONE {
       		 print("Successfully updated row.")
   		 } else {
       		 print("Could not update row.")
       		 throw CRUDRepositoryError.databaseError
   		 }
   	 } else {
   		 print("UPDATE statement could not be prepared.")
   		 throw CRUDRepositoryError.databaseError
   	 }
   	 sqlite3_finalize(updateStatement)
    }
    
    func delete(id: CRUDRepository.ItemIdentifier) async throws {
   	 guard try await isExists(id: id) else {
   		 throw CRUDRepositoryError.recordNotFound(id: id)
   	 }
   	 let deleteStatementString = "DELETE FROM \(tableName) WHERE id = ?;"
   	 var deleteStatement: OpaquePointer? = nil
   	 if sqlite3_prepare_v2(db, deleteStatementString, -1, &deleteStatement, nil) == SQLITE_OK {
   		 sqlite3_bind_text(deleteStatement, 1, id, -1, nil)
   		 if sqlite3_step(deleteStatement) == SQLITE_DONE {
       		 print("Successfully deleted row.")
   		 } else {
       		 print("Could not delete row.")
       		 throw CRUDRepositoryError.databaseError
   		 }
   	 } else {
   		 print("DELETE statement could not be prepared.")
   		 throw CRUDRepositoryError.databaseError
   	 }
   	 sqlite3_finalize(deleteStatement)
    }
}

Or the CRUD repository code for the file system, which was also generated by ChatGPT:


class FileSystemRepository: CRUDRepository {
	private typealias RecordIdentifier = String
    
	let directoryName: String
	let dataTransformer: DataTransformer
	private let fileManager = FileManager.default
	private var directoryURL: URL
    
	init(
    	directoryName: String = "Database",
    	dataTransformer: DataTransformer = JSONDataTransformer()
	) {
    	self.directoryName = directoryName
    	self.dataTransformer = dataTransformer
   	 
    	let paths = fileManager.urls(for: .documentDirectory, in: .userDomainMask)
    	directoryURL = paths.first!.appendingPathComponent(directoryName)
   	 
    	if !fileManager.fileExists(atPath: directoryURL.path) {
        	try? fileManager.createDirectory(at: directoryURL, withIntermediateDirectories: true, attributes: nil)
    	}
	}
    
	private func fileURL(id: CRUDRepository.ItemIdentifier) -> URL {
    	return directoryURL.appendingPathComponent("item_\(id).json")
	}
    
	private func isExists(id: CRUDRepository.ItemIdentifier) async throws -> Bool {
    	return fileManager.fileExists(atPath: fileURL(id: id).path)
	}
    
	func create<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier, item: T) async throws {
    	let data = try await dataTransformer.encode(item)
    	let url = fileURL(id: id)
    	try data.write(to: url)
	}
    
	func read<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier) async throws -> T {
    	let url = fileURL(id: id)
    	guard let data = fileManager.contents(atPath: url.path) else {
        	throw CRUDRepositoryError.recordNotFound(id: id)
    	}
    	let item: T = try await dataTransformer.decode(data: data)
    	return item
	}
    
	func update<T: CRUDRepository.Item>(id: CRUDRepository.ItemIdentifier, item: T) async throws {
    	guard try await isExists(id: id) else {
        	throw CRUDRepositoryError.recordNotFound(id: id)
    	}
    	let data = try await dataTransformer.encode(item)
    	let url = fileURL(id: id)
    	try data.write(to: url)
	}
    
	func delete(id: CRUDRepository.ItemIdentifier) async throws {
    	guard try await isExists(id: id) else {
        	throw CRUDRepositoryError.recordNotFound(id: id)
    	}
    	let url = fileURL(id: id)
    	try fileManager.removeItem(at: url)
	}
}

Replacing the repository in the client code:


print("One item access example")

do {
	let clientRecordIdentifier = "client"
	let clientOne = Client(name: "Chill Client")
	let repository = FileSystemRepository(
    	directoryName: "Clients Database",
    	dataTransformer: JSONDataTransformer()
	)
	try await repository.create(id: clientRecordIdentifier, item: clientOne)
	var clientRecord: Client = try await repository.read(id: clientRecordIdentifier)
	print("Client Name: \(clientRecord.name)")
	clientRecord.name = "Busy Client"
	try await repository.update(id: clientRecordIdentifier, item: clientRecord)
	let updatedClient: Client = try await repository.read(id: clientRecordIdentifier)
	print("Updated Client Name: \(updatedClient.name)")
	try await repository.delete(id: clientRecordIdentifier)
	let removedClientRecord: Client = try await repository.read(id: clientRecordIdentifier)
	print(removedClientRecord)
}
catch {
	print(error.localizedDescription)
}

print("Array access example")

let clientArrayRecordIdentifier = "clientArray"
let clientOne = Client(name: "Chill Client")
let repository = FileSystemRepository(
	directoryName: "Clients Database",
	dataTransformer: JSONDataTransformer()
)
let array = [clientOne]
try await repository.create(id: clientArrayRecordIdentifier, item: array)
let savedArray: [Client] = try await repository.read(id: clientArrayRecordIdentifier)
print(savedArray.first!)

Initialization of UserDefaultsRepository has been replaced with FileSystemRepository, with the appropriate arguments.
After running this second version of the client code, you will find a “Clients Database” directory in the Documents folder, containing a file with the array serialized to JSON, holding one Client structure.
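
Assuming Client consists of a single name property (as in the examples above), the contents of that file would look roughly like this:

[{"name":"Chill Client"}]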

Switching the data storage format

Now let’s ask ChatGPT to generate an encoder and decoder for XML:

class XMLDataTransformer: DataTransformer {
	let formatExtension = "xml"
    
	func encode<T: Encodable>(_ item: T) async throws -> Data {
    	let encoder = PropertyListEncoder()
    	encoder.outputFormat = .xml
    	return try encoder.encode(item)
	}
    
	func decode<T: Decodable>(data: Data) async throws -> T {
    	let decoder = PropertyListDecoder()
    	return try decoder.decode(T.self, from: data)
	}
}

Thanks to Swift’s built-in Codable types and property-list coders, the task is elementary for a neural network.

Replace JSON with XML in the client code:


print("One item access example")

do {
	let clientRecordIdentifier = "client"
	let clientOne = Client(name: "Chill Client")
	let repository = FileSystemRepository(
    	directoryName: "Clients Database",
    	dataTransformer: XMLDataTransformer()
	)
	try await repository.create(id: clientRecordIdentifier, item: clientOne)
	var clientRecord: Client = try await repository.read(id: clientRecordIdentifier)
	print("Client Name: \(clientRecord.name)")
	clientRecord.name = "Busy Client"
	try await repository.update(id: clientRecordIdentifier, item: clientRecord)
	let updatedClient: Client = try await repository.read(id: clientRecordIdentifier)
	print("Updated Client Name: \(updatedClient.name)")
	try await repository.delete(id: clientRecordIdentifier)
	let removedClientRecord: Client = try await repository.read(id: clientRecordIdentifier)
	print(removedClientRecord)
}
catch {
	print(error.localizedDescription)
}

print("Array access example")

let clientArrayRecordIdentifier = "clientArray"
let clientOne = Client(name: "Chill Client")
let repository = FileSystemRepository(
	directoryName: "Clients Database",
	dataTransformer: XMLDataTransformer()
)
let array = [clientOne]
try await repository.create(id: clientArrayRecordIdentifier, item: array)
let savedArray: [Client] = try await repository.read(id: clientArrayRecordIdentifier)
print(savedArray.first!)

The client code has changed by only one expression: JSONDataTransformer -> XMLDataTransformer.

Result

CRUD repositories are one of the design patterns that can be used to achieve loose coupling between application architecture components. Another solution is ORM (Object-Relational Mapping): in short, with ORM the structures are mapped directly to database tables, and any changes to the models are then mapped (!) back to the database.
But that’s a completely different story.
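
For a rough illustration of the mapping idea only (not tied to the Swift code above; the table and class names here are invented for the example), a minimal hand-rolled sketch in Python with the standard sqlite3 module:

import sqlite3
from dataclasses import dataclass

@dataclass
class Client:
    id: int
    name: str

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")

def save(client: Client) -> None:
    # the model's fields are mapped to table columns
    connection.execute(
        "INSERT OR REPLACE INTO clients (id, name) VALUES (?, ?)",
        (client.id, client.name),
    )

def load(client_id: int) -> Client:
    # a table row is mapped back to a model instance
    row = connection.execute(
        "SELECT id, name FROM clients WHERE id = ?", (client_id,)
    ).fetchone()
    return Client(id=row[0], name=row[1])

save(Client(id=1, name="Chill Client"))
print(load(1))

A real ORM automates exactly this save/load mapping for you; that is the whole trick.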

A complete implementation of CRUD repositories for Swift is available at:
https://gitlab.com/demensdeum/crud-example

By the way, Swift has long been supported outside of macOS, the code from the article was completely written and tested on Arch Linux.

Sources

https://developer.apple.com/documentation/combine/topleveldecoder
https://developer.apple.com/documentation/combine/toplevelencoder
https://en.wikipedia.org/wiki/Create,_read,_update_and_delete

dd input/output error

What to do if you get an input/output error when copying a normal disk using dd in Linux?

The situation is very sad, but solvable. Most likely, you are dealing with a failed disk containing bad blocks that are no longer usable, writeable, or readable.

Be sure to check such a disk with S.M.A.R.T.; most likely it will show you disk errors. That was exactly my situation: the number of bad blocks was so huge that I had to say goodbye to the old hard drive and replace it with a new SSD.

The problem was that this disk held a fully working system with licensed software that was needed for work. I tried to use partimage to copy the data quickly, but suddenly discovered that the utility copies only a third of the disk and then ends with either a segfault or some other funny C/C++ joke.

Then I tried to copy the data using dd, and it turned out that dd gets to about the same place as partimage, and then an input/output error occurs. At the same time, all sorts of funny flags like conv=noerror, skip and so on did not help at all.

However, the data was copied to another disk without any problems using a GNU utility called ddrescue.

After that, my hair became silky, my wife came back, the kids returned, and the dog stopped chewing on the couch.

A big plus of ddrescue is its built-in progress bar, so you don't have to rig up workarounds like pv and various not-so-pretty dd flags. ddrescue also shows the number of attempts made to read the data; the wiki also says the utility has some kind of clever algorithm for reading damaged data, but let's leave verifying that to people who like digging through source code, we're not those people, right?
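
For reference, a typical invocation looks roughly like this (device names are placeholders; the third argument is the map file that lets you resume an interrupted copy):

ddrescue /dev/sdX /dev/sdY rescue.map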

https://ru.wikipedia.org/wiki/Ddrescue
https://www.gnu.org/software/ddrescue/ddrescue_ru.html

ChatGPT

Hello everyone! In this article, I want to talk about ChatGPT – a powerful language modeling tool from OpenAI that can help with various tasks related to text processing. I will show how this tool works and how it can be used in practical situations. Let’s get started!

ChatGPT is currently one of the world’s best neural network language models. It was created to help developers create intelligent systems that can generate natural language and communicate with people in it.

One of the key benefits of ChatGPT is its ability to contextually model text. This means that the model takes into account previous dialogue and uses it to better understand the situation and generate a more natural response.

You can use ChatGPT to solve various tasks such as customer support automation, chatbot creation, text generation and much more.

The neural networks behind ChatGPT have been trained on huge amounts of text to ensure high prediction accuracy. This allows the model to generate natural text that can support conversations and answer questions.

With ChatGPT, you can create your own chatbots and other intelligent systems that can interact with people using natural language. This can be especially useful in industries such as tourism, retail, and customer support.

In conclusion, ChatGPT is a powerful tool for solving various language modeling problems. Its context modeling capabilities make it especially useful for building chatbots and intelligent systems.


In fact, ChatGPT wrote everything above entirely by itself. What? Yes? I’m shocked myself!

You can try the network itself here:
https://chat.openai.com/chat

Turn on USB keyboard backlight on macOS

Recently bought a very inexpensive Getorix GK-45X USB keyboard with RGB backlighting. After connecting it to a MacBook Pro with an M1 processor, it became clear that the RGB backlighting was not working. Even pressing the magic combination Fn + Scroll Lock did not turn on the backlighting, only the level of the MacBook screen backlighting changed.
There are several solutions to this problem, namely OpenRGB (does not work), HID LED Test (does not work). Only the kvmswitch utility worked:
https://github.com/stoutput/OSX-KVM

You need to download it from GitHub and allow it to run from the terminal in the Security panel of the System Settings.
As I understood from the description, after launching the utility sends a press of Fn + Scroll Lock, thus turning on/off the backlight on the keyboard.

Tree sort

Tree sort – sorting with a binary search tree. Time complexity – O(n²) in the worst case, O(n log n) on average. In such a tree, every value to the left of a node is smaller than the node and every value to the right is greater; walking from the root and printing values from left to right, we get a sorted list of numbers. Amazing, right?

Let’s consider the binary search tree diagram:

Derrick Coetzee (public domain)

Try reading the numbers manually, starting from the second-to-last node in the lower left corner: for each node, read its left child first, then the node itself, then its right child.

It will look like this:

  1. The second to last node on the bottom left is 3.
  2. It has a left branch – 1.
  3. We take this number (1)
  4. Next we take the vertex itself 3 (1, 3)
  5. On the right is branch 6, but it contains branches. Therefore, we read it in the same way.
  6. Left branch of node 6 is number 4 (1, 3, 4)
  7. The node itself is 6 (1, 3, 4, 6)
  8. Right 7 (1, 3, 4, 6, 7)
  9. We go up to the root node – 8 (1, 3, 4, 6, 7, 8)
  10. Print everything on the right by analogy
  11. We get the final list – 1, 3, 4, 6, 7, 8, 10, 13, 14

To implement the algorithm in code, two functions are required:

  1. Building a Binary Search Tree
  2. Print out the binary search tree in the correct order

A binary search tree is assembled in the same way as it is read, each node is assigned a number on the left or right, depending on whether it is smaller or larger.

Example in Lua:


Node = {} -- the node table must exist before defining methods on it

function Node:new(value, lhs, rhs)
    output = {}
    setmetatable(output, self)
    self.__index = self  
    output.value = value
    output.lhs = lhs
    output.rhs = rhs
    output.counter = 1
    return output  
end

function Node:Increment()
    self.counter = self.counter + 1
end

function Node:Insert(value)
    if self.lhs ~= nil and self.lhs.value > value then
        self.lhs:Insert(value)
        return
    end

    if self.rhs ~= nil and self.rhs.value < value then
        self.rhs:Insert(value)
        return
    end

    if self.value == value then
        self:Increment()
        return
    elseif self.value > value then
        if self.lhs == nil then
            self.lhs = Node:new(value, nil, nil)
        else
            self.lhs:Insert(value)
        end
        return
    else
        if self.rhs == nil then
            self.rhs = Node:new(value, nil, nil)
        else
            self.rhs:Insert(value)
        end
        return
    end
end

function Node:InOrder(output)
    if self.lhs ~= nil then
       output = self.lhs:InOrder(output)
    end
    output = self:printSelf(output)
    if self.rhs ~= nil then
        output = self.rhs:InOrder(output)
    end
    return output
end

function Node:printSelf(output)
    for i=0,self.counter-1 do
        output = output .. tostring(self.value) .. " "
    end
    return output
end

function PrintArray(numbers)
    output = ""
    for i=0,#numbers do
        output = output .. tostring(numbers[i]) .. " "
    end    
    print(output)
end

function Treesort(numbers)
    rootNode = Node:new(numbers[0], nil, nil)
    for i=1,#numbers do
        rootNode:Insert(numbers[i])
    end
    print(rootNode:InOrder(""))
end


numbersCount = 10
maxNumber = 9

numbers = {}

for i=0,numbersCount-1 do
    numbers[i] = math.random(0, maxNumber)
end

PrintArray(numbers)
Treesort(numbers)

An important nuance: for numbers equal to a node's value, many interesting mechanisms of attaching them to the node have been invented; I simply added a counter to the node class, and when printing, each number is emitted according to its counter.

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/treesort

Sources

TreeSort Algorithm Explained and Implemented with Examples in Java | Sorting Algorithms | Geekific – YouTube

Tree sort – YouTube

Convert Sorted Array to Binary Search Tree (LeetCode 108. Algorithm Explained) – YouTube

Sorting algorithms/Tree sort on a linked list – Rosetta Code

Tree Sort – GeeksforGeeks

Tree sort – Wikipedia

How to handle duplicates in Binary Search Tree? – GeeksforGeeks

Tree Sort | GeeksforGeeks – YouTube

Bucket Sort

Bucket Sort – sorting by buckets. The algorithm is similar to counting sort, with the difference that the numbers are collected in “buckets”-ranges, then the buckets are sorted using any other, sufficiently productive, sorting algorithm, and the final chord is the unfolding of the “buckets” one by one, resulting in a sorted list.

The algorithm’s average time complexity is O(n + k). It runs in linear time for data that follows a uniform distribution. To put it simply, the elements must fall within a certain range without “spikes”, for example numbers from 0.0 to 1.0. If numbers like 4 or 999 show up among them, such a series can no longer be considered “even” by any reasonable standard.

Example of implementation in Julia:

function bucketSort(numbers, bucketsCount)
    buckets = Vector{Vector{Int}}()
    
    for i in 0:bucketsCount - 1
        bucket = Vector{Int}()
        push!(buckets, bucket)
    end

    maxNumber = maximum(numbers)

    for i in 0:length(numbers) - 1
        bucketIndex = 1 + Int(floor(bucketsCount * numbers[1 + i] / (maxNumber + 1)))
        push!(buckets[bucketIndex], numbers[1 + i])
    end

    for i in 0:length(buckets) - 1
        bucketIndex = 1 + i
        buckets[bucketIndex] = sort(buckets[bucketIndex])
    end

    flat = [(buckets...)...]
    print(flat, "\n")

end

numbersCount = 10
maxNumber = 10
numbers = rand(1:maxNumber, numbersCount)
print(numbers,"\n")
bucketsCount = 10
bucketSort(numbers, bucketsCount)

The number of buckets also affects the algorithm's performance; for a larger set of numbers it is better to take more buckets (Algorithms in a Nutshell by George T. Heineman).

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/bucketSort

Sources

https://www.youtube.com/watch?v=VuXbEb5ywrU
https://www.youtube.com/watch?v=ELrhrrCjDOA
https://medium.com/karuna-sehgal/an-introduction-to-bucket-sort-62aa5325d124
https://www.geeksforgeeks.org/bucket-sort-2/
https://ru.wikipedia.org/wiki/%D0%91%D0%BB%D0%BE%D1%87%D0%BD%D0%B0%D1%8F_%D1%81%D0%BE%D1%80%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%BA%D0%B0
https://www.youtube.com/watch?v=LPrF9yEKTks
https://en.wikipedia.org/wiki/Bucket_sort
https://julialang.org/
https://www.oreilly.com/library/view/algorithms-in-a/9780596516246/ch04s08.html

Radixsort

Radix Sort is a digit-by-digit sort. The algorithm is similar to counting sort in that elements are never compared with each other; instead, elements are grouped character by character into *buckets*, where the bucket is selected by the value of the current digit-character. Time complexity is O(nd).

It works something like this:

  • As input we receive the numbers 6, 12, 44, 9
  • Let’s create 10 list buckets (0-9) into which we will add/sort numbers bitwise.

Next:

  1. Start a loop with counter i until the maximum number of characters in the number
  2. According to the index i from right to left we get one symbol for each number, if there is no symbol, we consider it zero
  3. Convert the symbol to a number
  4. We select a bucket by index – number, put the number there in full
  5. After we finish iterating over the numbers, we transform all the buckets back into a list of numbers
  6. We get numbers sorted by rank
  7. Repeat until all the digits are gone

Radix Sort example in Scala:


import scala.collection.mutable.ListBuffer
import scala.util.Random.nextInt

object RadixSort {

    def main(args: Array[String]) = {

        var maxNumber = 200
        var numbersCount = 30
        var maxLength = maxNumber.toString.length() - 1

        var referenceNumbers = LazyList.continually(nextInt(maxNumber + 1)).take(numbersCount).toList
        var numbers = referenceNumbers

        var buckets = List.fill(10)(ListBuffer[Int]())

        for( i <- 0 to maxLength) { numbers.foreach( number => {
                    var numberString = number.toString
                    if (numberString.length() > i) {
                        var index = numberString.length() - i - 1
                        var character = numberString.charAt(index).toString
                        var characterInteger = character.toInt
                        buckets.apply(characterInteger) += number
                    }
                    else {
                        buckets.apply(0) += number
                    }
                }
            )

            numbers = buckets.flatten
            buckets.foreach(x => x.clear())
        }

        println(referenceNumbers)
        println(numbers)
        println(s"Validation result: ${numbers == referenceNumbers.sorted}")
    }
}

The algorithm also has a version for parallel execution, for example on a GPU; there is also a bit sorting variant, which is probably very interesting and truly breathtaking!

Links

https://gitlab.com/demensdeum/algorithms/-/blob/master/sortAlgorithms/radixSort/radixSort.scala

Sources

https://ru.wikipedia.org/wiki/%D0%9F%D0%BE%D1%80%D0%B0%D0%B7%D1%80%D1%8F%D0%B4%D0%BD%D0%B0%D1%8F_%D1%81%D0%BE%D1%80%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%BA%D0%B0
https://www.geeksforgeeks.org/radix-sort/

https://www.youtube.com/watch?v=toAlAJKojos

https://github.com/gyatskov/radix-sort

Heapsort

Heapsort is pyramid sort. The time complexity of the algorithm is O(n log n); fast, right? I would call it the falling-stones sort. It seems to me that the easiest way to explain it is visually.

The input is a list of numbers, for example:
5, 0, 7, 2, 3, 9, 4

From left to right, a data structure is made – a binary tree, or as I call it – a pyramid. Pyramid elements can have a maximum of two child elements, but only one top element.

Let’s make a binary tree:
⠀⠀5
⠀0⠀7
2 3 9 4

If you look at the pyramid long enough, you can see that it is simply the numbers of the array following one after another, with the number of elements on each floor doubling.

Next comes the most interesting part, we sort the pyramid from the bottom up, using the falling pebbles method (heapify). Sorting could start from the last floor (2 3 9 4), but there is no point because there is no floor below to fall to.

Therefore, we begin to drop elements from the penultimate floor (0 7)
⠀⠀5
⠀0⠀7
2 3 9 4

The first element to fall is chosen from the right, in our case it is 7, then we look at what is under it, and under it is 9 and 4, nine is greater than four, and nine is also greater than seven! We drop 7 on 9, and lift 9 to the place of 7.
⠀⠀5
⠀0⠀9
2 3 7 4

Next, we understand that the seven has nowhere to fall lower, we move on to the number 0, which is on the penultimate floor on the left:
⠀⠀5
0⠀9
2 3 7 4

We look at what’s underneath it – 2 and 3, two is less than three, three is greater than zero, so we swap zero and three:
⠀⠀5
⠀3⠀9
2 0 7 4

When you reach the end of the floor, go to the floor above and drop everything there if you can.
The result will be a data structure – a heap, namely a max heap, since the largest element is at the top:
⠀⠀9
⠀3⠀7
2 0 5 4

If you return to the array representation, you will get a list:
[9, 3, 7, 2, 0, 5, 4]

From this we can conclude that by swapping the first and last elements, we will get the first number in the final sorted position, namely 9 should be at the end of the sorted list, swap them:
[4, 3, 7, 2, 0, 5, 9]

Let’s look at a binary tree:
⠀⠀4
⠀3⠀7
2 0 5 9

We have a situation where the bottom of the tree is sorted, we just need to drop 4 to the correct position, we repeat the algorithm, but we do not take into account the already sorted numbers, namely 9:
⠀⠀4
⠀3⠀7
2 0 5 9

⠀⠀7
⠀3⠀4
2 0 5 9

⠀⠀7
⠀3⠀5
2 0 4 9

It turned out that, having dropped 4, we raised the next largest number after 9 – 7. We swap the last unsorted number (4) and the largest number (7)
⠀⠀4
⠀3⠀5
2 0 7 9

It turns out that now we have two numbers in the correct final position:
4, 3, 5, 2, 0, 7, 9

Then we repeat the sorting algorithm, ignoring those already sorted, and as a result we get a heap of the following type:
⠀⠀0
⠀2⠀3
4 5 7 9

Or as a list:
0, 2, 3, 4, 5, 7, 9

Implementation

The algorithm is usually divided into three functions:

  1. Creating a heap
  2. Heapify Algorithm
  3. Replace the last unsorted element with the first

The heap is built by walking the penultimate row of the binary tree with the heapify function, from right to left, down to the start of the array. Then, in a loop, the first element is swapped with the last unsorted one; the new first element then sinks into place, the largest remaining element rises back to the top, and the loop repeats with one fewer participant, since after each pass the sorted numbers accumulate at the end of the list.
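
As a compact illustration of that three-function structure (independent of the Ruby listing below; the names are mine), here is a minimal sketch in Python:

def heapsort(numbers):
    numbers = list(numbers)

    def sift_down(start, right_bound):
        cursor = start
        while cursor * 2 + 1 <= right_bound:
            child = cursor * 2 + 1
            # pick the larger of the two children, if the right one exists
            if child + 1 <= right_bound and numbers[child] < numbers[child + 1]:
                child += 1
            if numbers[cursor] < numbers[child]:
                numbers[cursor], numbers[child] = numbers[child], numbers[cursor]
                cursor = child
            else:
                break

    # build a max-heap: sift down every parent, starting from the last one
    last_parent = (len(numbers) - 2) // 2
    for start in range(last_parent, -1, -1):
        sift_down(start, len(numbers) - 1)

    # repeatedly move the current maximum to the end of the unsorted part
    for right_bound in range(len(numbers) - 1, 0, -1):
        numbers[0], numbers[right_bound] = numbers[right_bound], numbers[0]
        sift_down(0, right_bound - 1)

    return numbers

print(heapsort([5, 0, 7, 2, 3, 9, 4]))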

Heapsort example in Ruby:






DEMO = false # set to true to print the heap step by step

module Colors
    BLUE = "\033[94m"
    RED = "\033[31m"
    STOP = "\033[0m"
end

def heapsort(rawNumbers)
    numbers = rawNumbers.dup

    def swap(numbers, from, to)
        temp = numbers[from]
        numbers[from] = numbers[to]
        numbers[to] = temp
    end

    def heapify(numbers)
        count = numbers.length()
        lastParentNode = (count - 2) / 2

        for start in lastParentNode.downto(0)
            siftDown(numbers, start, count - 1)
            start -= 1
        end

        if DEMO
            puts "--- heapify ends ---"
        end
    end

    def siftDown(numbers, start, rightBound)
        cursor = start
        printBinaryHeap(numbers, cursor, rightBound)

        def calculateLhsChildIndex(cursor)
            return cursor * 2 + 1
        end

        def calculateRhsChildIndex(cursor)
            return cursor * 2 + 2
        end

        while calculateLhsChildIndex(cursor) <= rightBound
            lhsChildIndex = calculateLhsChildIndex(cursor)
            rhsChildIndex = calculateRhsChildIndex(cursor)

            lhsNumber = numbers[lhsChildIndex]
            biggerChildIndex = lhsChildIndex

            if rhsChildIndex <= rightBound
                rhsNumber = numbers[rhsChildIndex]
                if lhsNumber < rhsNumber
                    biggerChildIndex = rhsChildIndex
                end
            end

            if numbers[cursor] < numbers[biggerChildIndex]
                swap(numbers, cursor, biggerChildIndex)
                cursor = biggerChildIndex
            else
                break
            end
            printBinaryHeap(numbers, cursor, rightBound)
        end
        printBinaryHeap(numbers, cursor, rightBound)
    end

    def printBinaryHeap(numbers, nodeIndex = -1, rightBound = -1)
        if DEMO == false
            return
        end
        perLineWidth = (numbers.length() * 4).to_i
        linesCount = Math.log2(numbers.length()).ceil()
        xPrinterCount = 1
        cursor = 0
        spacing = 3
        for y in (0..linesCount)
            line = perLineWidth.times.map { " " }
            spacing = spacing == 3 ? 4 : 3
            printIndex = (perLineWidth / 2) - (spacing * xPrinterCount) / 2
            for x in (0..xPrinterCount - 1)
                if cursor >= numbers.length
                    break
                end
                if nodeIndex != -1 && cursor == nodeIndex
                    line[printIndex] = "%s%s%s" % [Colors::RED, numbers[cursor].to_s, Colors::STOP]
                elsif rightBound != -1 && cursor > rightBound
                    line[printIndex] = "%s%s%s" % [Colors::BLUE, numbers[cursor].to_s, Colors::STOP]
                else
                    line[printIndex] = numbers[cursor].to_s
                end
                cursor += 1
                printIndex += spacing
            end
            print line.join()
            xPrinterCount *= 2
            print "\n"
        end
    end

    heapify(numbers)
    rightBound = numbers.length() - 1

    while rightBound > 0
        swap(numbers, 0, rightBound)
        rightBound -= 1
        siftDown(numbers, 0, rightBound)
    end

    return numbers
end

numbersCount = 14
maximalNumber = 10
numbers = numbersCount.times.map { Random.rand(maximalNumber) }
print numbers
print "\n---\n"

start = Time.now
sortedNumbers = heapsort(numbers)
finish = Time.now
heapSortTime = finish - start

start = Time.now
referenceSortedNumbers = numbers.sort()
finish = Time.now
referenceSortTime = finish - start

print "Reference sort: "
print referenceSortedNumbers
print "\n"
print "Reference sort time: %f\n" % referenceSortTime
print "Heap sort:      "
print sortedNumbers
print "\n"
if DEMO == false
    print "Heap sort time:      %f\n" % heapSortTime
else
    print "Disable DEMO for performance measure\n"
end

if sortedNumbers != referenceSortedNumbers
    puts "Validation failed"
    exit 1
else
    puts "Validation success"
    exit 0
end



Without visualization, this algorithm is not easy to understand, so the first thing I recommend is to write a function that will print the current form of the binary tree.

Links

https://gitlab.com/demensdeum/algorithms/-/blob/master/sortAlgorithms/heapsort/heapsort.rb

Sources

http://rosettacode.org/wiki/Sorting_algorithms/Heapsort
https://www.youtube.com/watch?v=LbB357_RwlY

https://habr.com/ru/company/otus/blog/460087/

https://ru.wikipedia.org/wiki/Пирамидальная_сортировка

https://neerc.ifmo.ru/wiki/index.php?title=Heap_sort

https://wiki5.ru/wiki/Heapsort

https://wiki.c2.com/?HeapSort

https://ru.wikipedia.org/wiki/Дерево_(структура_данных)

https://ru.wikipedia.org/wiki/Куча_(структура_данных)

https://www.youtube.com/watch?v=2DmK_H7IdTo

https://www.youtube.com/watch?v=kU4KBD4NFtw

https://www.youtube.com/watch?v=DU1uG5310x0

https://www.youtube.com/watch?v=BzQGPA_v-vc

https://www.geeksforgeeks.org/array-representation-of-binary-heap/

https://habr.com/ru/post/112222/

https://www.cs.usfca.edu/~galles/visualization/BST.html

https://www.youtube.com/watch?v=EQzqHWtsKq4

https://medium.com/@dimko1/%D0%B0%D0%BB%D0%B3%D0%BE%D1%80%D0%B8%D1%82%D0%BC%D1%8B-%D1%81%D0%BE%D1%80%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%BA%D0%B8-heapsort-796ba965018b

https://ru.wikibrief.org/wiki/Heapsort

https://www.youtube.com/watch?v=GUUpmrTnNbw

Bumblebee All Troubles

Recently, it turned out that users of modern Nvidia GPUs under Arch Linux do not need to use the bumblebee package at all, for example, for me it did not detect an external monitor when connected. I recommend removing the bumblebee package and all related packages, and installing prime using the instructions on the Arch Wiki.
Next, to launch games and 3D applications on the discrete GPU, prepend prime-run; for Steam this is done by putting prime-run %command% in the game's additional launch options.
To check that it works, you can use glxgears: prime-run glxgears.
https://bbs.archlinux.org/viewtopic.php?pid=2048195#p2048195

Quicksort

Quicksort is a divide-and-conquer sorting algorithm. We recursively sort the array of numbers in parts, moving numbers smaller and larger than a chosen pivot element to either side of it and inserting the pivot itself into the gap between them. After several recursive iterations we get a sorted list. Time complexity is O(n log n) on average and O(n²) in the worst case.

Scheme:

  1. We start by receiving the list of elements and the sorting boundaries from outside. On the first step, the boundaries span the whole list, from beginning to end
  2. We check that the start and end boundaries do not cross; if they do, it is time to finish
  3. We select some element from the list and call it the pivot
  4. We move the pivot out of the way to the end, to the last index
  5. We create a counter of *smaller numbers*, initially equal to zero
  6. We walk the list in a loop from left to right, up to (but not including) the last index where the pivot now sits
  7. Each element is compared with the pivot
  8. If it is less than the pivot, we swap it with the element at the smaller-numbers counter index and increment the counter
  9. When the loop reaches the pivot's position, we stop and swap the pivot with the element at the smaller-numbers counter index
  10. We run the algorithm separately for the left, smaller part of the list and separately for the right, larger part
  11. Eventually all recursive iterations stop thanks to the check in step 2
  12. We get a sorted list

Quicksort was invented by the scientist Charles Antony Richard Hoare at Moscow State University, where, while learning Russian, he studied machine translation and probability theory in Kolmogorov's school. In 1960, due to the political crisis, he left the Soviet Union.

Example implementation in Rust:


use rand::Rng;

fn swap(numbers: &mut [i64], from: usize, to: usize) {
    let temp = numbers[from];
    numbers[from] = numbers[to];
    numbers[to] = temp;
}

fn quicksort(numbers: &mut [i64], left: usize, right: usize) {
    if left >= right {
        return
    }
    let length = right - left;
    if length <= 1 {
        return
    }
    let pivot_index = left + (length / 2);
    let pivot = numbers[pivot_index];

    let last_index = right - 1;
    swap(numbers, pivot_index, last_index);

    let mut less_insert_index = left;

    for i in left..last_index {
        if numbers[i] < pivot {
            swap(numbers, i, less_insert_index);
            less_insert_index += 1;
        }
    }
    swap(numbers, last_index, less_insert_index);
    quicksort(numbers, left, less_insert_index);
    quicksort(numbers, less_insert_index + 1, right);
}

fn main() {
    let mut numbers = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
    let mut reference_numbers = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];

    let mut rng = rand::thread_rng();
    for i in 0..numbers.len() {
        numbers[i] = rng.gen_range(-10..10);
        reference_numbers[i] = numbers[i];
    }

    reference_numbers.sort();

  println!("Numbers           {:?}", numbers);
  let length = numbers.len();
  quicksort(&mut numbers, 0, length);
  println!("Numbers           {:?}", numbers);
  println!("Reference numbers {:?}", reference_numbers);

  if numbers != reference_numbers {
    println!("Validation failed");
    std::process::exit(1);
  }
  else {
    println!("Validation success!");
    std::process::exit(0);
  }
}

If nothing is clear, I suggest watching a video by Rob Edwards from the University of San Diego https://www.youtube.com/watch?v=ZHVk2blR45Q it shows the essence and implementation of the algorithm in the simplest way, step by step.

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/quickSort

Sources

https://www.youtube.com/watch?v=4s-aG6yGGLU
https://www.youtube.com/watch?v=ywWBy6J5gz8
https://www.youtube.com/watch?v=Hoixgm4-P4M
https://ru.wikipedia.org/wiki/Быстрая_сортировка
https://www.youtube.com/watch?v=Hoixgm4-P4M
https://www.youtube.com/watch?v=XE4VP_8Y0BU
https://www.youtube.com/watch?v=MZaf_9IZCrc
https://www.youtube.com/watch?v=ZHVk2blR45Q
http://rosettacode.org/wiki/Sorting_algorithms/Quicksort
https://www.youtube.com/watch?v=4s-aG6yGGLU
https://www.youtube.com/watch?v=dQw4w9WgXcQ
https://www.youtube.com/watch?v=maibrCbZWKw
https://www.geeksforgeeks.org/quick-sort/
https://www.youtube.com/watch?v=uXBnyYuwPe8

Binary Insertion Sort

Binary Insertion Sort is a variant of insertion sort in which the insertion position is determined using binary search. The time complexity of the algorithm is O(n²).

The algorithm works like this:

  1. Starts a loop from zero to the end of the list
  2. In the loop, a number is selected for sorting, the number is saved in a separate variable
  3. Binary search finds the index to insert this number compared to the numbers to the left
  4. Once the index is found, the numbers on the left are shifted one position to the right, starting with the insertion index. The process will erase the number you want to sort.
  5. The previously saved number is inserted at the insertion index
  6. At the end of the loop, the entire list will be sorted

During the binary search, a situation is possible where the number is not found, yet an index must still be returned. Due to the nature of binary search, the number closest to the target will be found; to return the index, it has to be compared with the target: if the target is smaller, the insertion point is to the left of that index, and if it is greater or equal, to the right.

Go code:


package main

import (
	"fmt"
	"math/rand"
	"time"
)

const numbersCount = 20
const maximalNumber = 100

func binarySearch(numbers []int, item int, low int, high int) int {
	for high > low {
		center := (low + high) / 2
		if numbers[center] < item {
			low = center + 1
		} else if numbers[center] > item {
			high = center - 1
		} else {
			return center
		}
	}

	if numbers[low] < item {
		return low + 1
	} else {
		return low
	}
}

func main() {
	rand.Seed(time.Now().Unix())
	var numbers [numbersCount]int
	for i := 0; i < numbersCount; i++ {
		numbers[i] = rand.Intn(maximalNumber)
	}
	fmt.Println(numbers)

	for i := 1; i < len(numbers); i++ {
		searchAreaLastIndex := i - 1
		insertNumber := numbers[i]
		insertIndex := binarySearch(numbers[:], insertNumber, 0, searchAreaLastIndex)
		for x := searchAreaLastIndex; x >= insertIndex; x-- {
			numbers[x+1] = numbers[x]
		}
		numbers[insertIndex] = insertNumber
	}
	fmt.Println(numbers)
}

Links

https://gitlab.com/demensdeum/algorithms/-/blob/master/sortAlgorithms/binaryInsertionSort/binaryInsertionSort.go

Sources

https://www.geeksforgeeks.org/binary-insertion-sort/
https://www.youtube.com/watch?v=-OVB5pOZJug

Shell Sort

Shell Sort is a variant of insertion sorting with preliminary combing of the array of numbers.

We need to remember how insertion sort works (a short sketch follows the list):

1. A loop runs from zero to the end of the list, splitting the array into two parts
2. For the left part, a second loop compares elements from right to left; the smaller element keeps sinking to the left until a smaller element is found to its left
3. At the end of both loops, we get a sorted list
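
A minimal sketch of plain insertion sort in Python (this is exactly the gap = 1 special case of the Shell sort code below):

import random

def insertion_sort(numbers):
    for i in range(1, len(numbers)):
        j = i
        # sink numbers[j] to the left while its left neighbour is bigger
        while j > 0 and numbers[j - 1] > numbers[j]:
            numbers[j - 1], numbers[j] = numbers[j], numbers[j - 1]
            j -= 1
    return numbers

print(insertion_sort([random.randrange(1, 10) for _ in range(10)]))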

Once upon a time, computer scientist Donald Shell was puzzled by how to improve the insertion sort algorithm. He came up with the idea of ​​also going through the array with two cycles, but at a certain distance, gradually reducing the “comb” until it turns into a regular insertion sort algorithm. Everything is really that simple, no pitfalls, we add another cycle to the two cycles on top, in which we gradually reduce the size of the “comb”. The only thing you will need to do is check the distance when comparing, so that it does not go beyond the array.

A really interesting topic is the choice of the sequence of changing the comparison length at each iteration of the first cycle. It is interesting because the performance of the algorithm depends on it.

A table of known variants and time complexity can be found here: https://en.wikipedia.org/wiki/Shellsort#Gap_sequences

Different people were involved in calculating the ideal distance, apparently they were so interested in the topic. Couldn’t they just run Ruby and call the fastest sort() algorithm?

In general, these strange people wrote dissertations on calculating the distance/gap of the “comb” for the Shell algorithm. I simply used the results of their work and checked 5 kinds of sequences: Shell's original, Hibbard, Knuth-Pratt, Ciura, and Sedgewick.

import time
import random
from functools import reduce
from typing import List
import math

DEMO_MODE = False

if input("Demo Mode Y/N? ").upper() == "Y":
    DEMO_MODE = True

class Colors:
    BLUE = '\033[94m'
    RED = '\033[31m'
    END = '\033[0m'

def swap(list, lhs, rhs):
    list[lhs], list[rhs] = list[rhs], list[lhs]
    return list

def colorPrintoutStep(numbers: List[int], lhs: int, rhs: int):
    for index, number in enumerate(numbers):
        if index == lhs:
            print(f"{Colors.BLUE}", end = "")
        elif index == rhs:
            print(f"{Colors.RED}", end = "")
        print(f"{number},", end = "")
        if index == lhs or index == rhs:
            print(f"{Colors.END}", end = "")
    print("\n")
    input(">")

def ShellSortLoop(numbers: List[int], distanceSequence: List[int]):
    distanceSequenceIterator = reversed(distanceSequence)
    while distance:= next(distanceSequenceIterator, None):
        for sortArea in range(0, len(numbers)):
            for rhs in reversed(range(distance, sortArea + 1)):
                lhs = rhs - distance
                if DEMO_MODE:
                    print(f"Distance: {distance}")
                    colorPrintoutStep(numbers, lhs, rhs)
                if numbers[lhs] > numbers[rhs]:
                    swap(numbers, lhs, rhs)
                else:
                    break

def ShellSort(numbers: List[int]):
    global ShellSequence
    ShellSortLoop(numbers, ShellSequence)

def HibbardSort(numbers: List[int]):
    global HibbardSequence
    ShellSortLoop(numbers, HibbardSequence)

def ShellPlusKnuttPrattSort(numbers: List[int]):
    global KnuttPrattSequence
    ShellSortLoop(numbers, KnuttPrattSequence)

def ShellPlusCiuraSort(numbers: List[int]):
    global CiuraSequence
    ShellSortLoop(numbers, CiuraSequence)

def ShellPlusSedgewickSort(numbers: List[int]):
    global SedgewickSequence
    ShellSortLoop(numbers, SedgewickSequence)

def insertionSort(numbers: List[int]):
    global insertionSortDistanceSequence
    ShellSortLoop(numbers, insertionSortDistanceSequence)

def defaultSort(numbers: List[int]):
    numbers.sort()

def measureExecution(inputNumbers: List[int], algorithmName: str, algorithm):
    if DEMO_MODE:
        print(f"{algorithmName} started")
    numbers = inputNumbers.copy()
    startTime = time.perf_counter()
    algorithm(numbers)
    endTime = time.perf_counter()
    print(f"{algorithmName} performance: {endTime - startTime}")

def sortedNumbersAsString(inputNumbers: List[int], algorithm) -> str:
    numbers = inputNumbers.copy()
    algorithm(numbers)
    return str(numbers)

if DEMO_MODE:
    maximalNumber = 10
    numbersCount = 10
else:
    maximalNumber = 10
    numbersCount = random.randint(10000, 20000)

randomNumbers = [random.randrange(1, maximalNumber) for i in range(numbersCount)]

ShellSequenceGenerator = lambda n: reduce(lambda x, _: x + [int(x[-1]/2)], range(int(math.log(numbersCount, 2))), [int(numbersCount / 2)])
ShellSequence = ShellSequenceGenerator(randomNumbers)
ShellSequence.reverse()
ShellSequence.pop()

HibbardSequence = [
    0, 1, 3, 7, 15, 31, 63, 127, 255, 511, 1023, 2047, 4095,
    8191, 16383, 32767, 65535, 131071, 262143, 524287, 1048575,
    2097151, 4194303, 8388607, 16777215, 33554431, 67108863, 134217727,
    268435455, 536870911, 1073741823, 2147483647, 4294967295, 8589934591
]

KnuttPrattSequence = [
    1, 4, 13, 40, 121, 364, 1093, 3280, 9841, 29524, 88573, 265720, 
    797161, 2391484, 7174453, 21523360, 64570081, 193710244, 581130733, 
    1743392200, 5230176601, 15690529804, 47071589413
]

CiuraSequence = [
            1, 4, 10, 23, 57, 132, 301, 701, 1750, 4376, 
            10941, 27353, 68383, 170958, 427396, 1068491, 
            2671228, 6678071, 16695178, 41737946, 104344866, 
            260862166, 652155416, 1630388541
]

SedgewickSequence = [
            1, 5, 19, 41, 109, 209, 505, 929, 2161, 3905,
            8929, 16001, 36289, 64769, 146305, 260609, 587521, 
            1045505, 2354689, 4188161, 9427969, 16764929, 37730305, 
            67084289, 150958081, 268386305, 603906049, 1073643521, 
            2415771649, 4294770689, 9663381505, 17179475969
]

insertionSortDistanceSequence = [1]

algorithms = {
    "Default Python Sort": defaultSort,
    "Shell Sort": ShellSort,
    "Shell + Hibbard" : HibbardSort,
    "Shell + Prat, Knutt": ShellPlusKnuttPrattSort,
    "Shell + Ciura Sort": ShellPlusCiuraSort,
    "Shell + Sedgewick Sort": ShellPlusSedgewickSort,
    "Insertion Sort": insertionSort
}

for name, algorithm in algorithms.items():
    measureExecution(randomNumbers, name, algorithm)

reference = sortedNumbersAsString(randomNumbers, defaultSort)

for name, algorithm in algorithms.items():
    if sortedNumbersAsString(randomNumbers, algorithm) != reference:
        print("Sorting validation failed")
        exit(1)

print("Sorting validation success")
exit(0)

In my implementation, for a random set of numbers, the fastest gaps are Sedgewick and Hibbard.

mypy

I would also like to mention the static typing analyzer for Python 3 – mypy. It helps to cope with the problems inherent in languages ​​with dynamic typing, namely, it eliminates the possibility of slipping something where it shouldn’t.

As experienced programmers say, “static typing is not needed when you have a team of professionals”, someday we will all become professionals, we will write code in complete unity and understanding with machines, but for now you can use such utilities and languages ​​with static typing.
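
A tiny illustrative example (the file and function names are mine): running mypy over a file like this flags the second call without executing the program:

def add_numbers(lhs: int, rhs: int) -> int:
    return lhs + rhs

add_numbers(1, 2)      # fine
add_numbers(1, "two")  # rejected by mypy: "two" is a str, but an int is expected

The check itself is just mypy example.py from the terminal.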

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/shellSort
http://mypy-lang.org/

Sources

https://dl.acm.org/doi/10.1145/368370.368387
https://en.wikipedia.org/wiki/Shellsort
http://rosettacode.org/wiki/Sorting_algorithms/Shell_sort
https://ru.wikipedia.org/wiki/Сортировка_Шелла
https://neerc.ifmo.ru/wiki/index.php?title=Сортировка_Шелла
https://twitter.com/gvanrossum/status/700741601966985216

Double Selection Sort

Double Selection Sort is a type of selection sort that should be twice as fast. The vanilla algorithm walks the list of numbers with a double loop, finds the minimum number and swaps it with the current element pointed to by the outer loop. Double selection sort looks for both the minimum and the maximum number, then swaps them with the two elements pointed to by the outer loop, one on the left and one on the right. This whole orgy ends when the two cursors meet in the middle of the list, at which point sorted numbers have accumulated to the left and right of the visual center.
The time complexity of the algorithm is the same as Selection Sort, O(n²), but there is supposedly a 30% speedup.

Edge case

Already at this stage you can imagine the collision case: for example, when the left cursor's number (the minimum) points at the maximum number in the list, then after the minimum number is moved, the swap of the maximum number immediately breaks. Therefore, all implementations of the algorithm contain a check for such cases, replacing the indices with correct ones. In my implementation, one check was enough:

  maximalNumberIndex = minimalNumberIndex;
}

Implementation in Cito

Cito is a library language, a transpiler language. You can write in it for C, C++, C#, Java, JavaScript, Python, Swift, TypeScript, OpenCL C while knowing absolutely nothing about those languages. Cito source code is translated into source code in the supported languages, which can then be used as a library or directly, by fixing the generated code by hand. A kind of Write once – translate to anything.
Double Selection Sort in Cito:

public class DoubleSelectionSorter // illustrative class name
{
    public static int[] sort(int[]# numbers, int length)
    {
        int[]# sortedNumbers = new int[length];
        for (int i = 0; i < length; i++) {
            sortedNumbers[i] = numbers[i];
        }
        for (int leftCursor = 0; leftCursor < length / 2; leftCursor++) {
            int minimalNumberIndex = leftCursor;
            int minimalNumber = sortedNumbers[leftCursor];

            int rightCursor = length - (leftCursor + 1);
            int maximalNumberIndex = rightCursor;
            int maximalNumber = sortedNumbers[maximalNumberIndex];

            for (int cursor = leftCursor; cursor <= rightCursor; cursor++) {
                int cursorNumber = sortedNumbers[cursor];
                if (minimalNumber > cursorNumber) {
                    minimalNumber = cursorNumber;
                    minimalNumberIndex = cursor;
                }
                if (maximalNumber < cursorNumber) {
                    maximalNumber = cursorNumber;
                    maximalNumberIndex = cursor;
                }
            }

            if (leftCursor == maximalNumberIndex) {
                maximalNumberIndex = minimalNumberIndex;
            }

            int fromNumber = sortedNumbers[leftCursor];
            int toNumber = sortedNumbers[minimalNumberIndex];
            sortedNumbers[minimalNumberIndex] = fromNumber;
            sortedNumbers[leftCursor] = toNumber;

            fromNumber = sortedNumbers[rightCursor];
            toNumber = sortedNumbers[maximalNumberIndex];
            sortedNumbers[maximalNumberIndex] = fromNumber;
            sortedNumbers[rightCursor] = toNumber;
        }
        return sortedNumbers;
    }
} 

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/doubleSelectionSort
https://github.com/pfusik/cito

Sources

https://www.researchgate.net/publication/330084245_Improved_Double_Selection_Sort_using_Algorithm
http://algolab.valemak.com/selection-double
https://www.geeksforgeeks.org/sorting-algorithm-slightly-improves-selection-sort/

Cocktail Shaker Sort

Cocktail Shaker Sort – a shaker sort, a variant of the bidirectional bubble sort.
The algorithm works as follows:

  1. The initial direction of iteration in the cycle is selected (usually left-to-right)
  2. Next, the numbers are checked in pairs in the loop
  3. If the element on the left is larger than its right neighbour, they swap places
  4. When a pass is finished, the traversal starts again with the direction inverted
  5. The passes are repeated until no more swaps occur

The time complexity of the algorithm is the same as bubble sort – O(n²).

Example of implementation in PHP:

<?php

function cocktailShakeSort($numbers)
{
    echo implode(",", $numbers),"\n";
    $direction = false;
    $sorted = false;
    do {
        $direction = !$direction;        
        $firstIndex = $direction == true ? 0 : count($numbers) - 1;
        $lastIndex = $direction == true ? count($numbers) - 1 : 0;
        
        $sorted = true;
        for (
            $i = $firstIndex;
            $direction == true ? $i < $lastIndex : $i > $lastIndex;
            $direction == true ? $i++ : $i--
        ) {
            $lhsIndex = $direction ? $i : $i - 1;
            $rhsIndex = $direction ? $i + 1 : $i;

            $lhs = $numbers[$lhsIndex];
            $rhs = $numbers[$rhsIndex];

            if ($lhs > $rhs) {
                $numbers[$lhsIndex] = $rhs;
                $numbers[$rhsIndex] = $lhs;
                $sorted = false;
            }
        }
    } while ($sorted == false);

    echo implode(",", $numbers);
}

$numbers = [2, 1, 4, 3, 69, 35, 55, 7, 7, 2, 6, 203, 9];
cocktailShakeSort($numbers);

?>

Links

https://gitlab.com/demensdeum/algorithms/-/blob/master/sortAlgorithms/cocktailShakerSort/cocktailShakerSort.php

Sources

https://www.youtube.com/watch?v=njClLBoEbfI
https://www.geeksforgeeks.org/cocktail-sort/
https://rosettacode.org/wiki/Sorting_algorithms/Cocktail_sort

…And Primus for All

In this note I will describe launching Steam games on the Arch Linux distribution on a laptop with an Intel + Nvidia configuration.

Counter-Strike: Global Offensive

The only configuration that worked for me is Primus-vk + Vulkan.

Install the required packages:
pacman -S vulkan-intel lib32-vulkan-intel nvidia-utils lib32-nvidia-utils vulkan-icd-loader lib32-vulkan-icd-loader primus_vk

Next, add launch options for Counter-Strike: Global Offensive:
pvkrun %command% -vulkan -console -fullscreen

Should work!

Sid Meier’s Civilization VI

Works in conjunction – Primus + OpenGL + LD_PRELOAD.

Install the Primus package:
pacman -S primus

Next, add launch options for Sid Meier’s Civilization VI:
LD_PRELOAD='/usr/lib/libfreetype.so.6:/usr/lib/libbrotlicommon.so.1:/usr/lib/libbrotlidec.so.1' primusrun %command%

LD_PRELOAD preloads the FreeType font library and the Brotli compression libraries.

Dota 2

Works in conjunction – Primus + OpenGL + removal of locks at startup.

Install the Primus package:
pacman -S primus

Next, add launch options for Dota 2:
primusrun %command% -gl -console

If the game doesn’t start with fcntl(5) for /tmp/source_engine_2808995433.lock failed, then try deleting the /tmp/source_engine_2808995433.lock file
rm /tmp/source_engine_2808995433.lock
Usually the lock file is left over from the last game session unless the game was closed naturally.

How to check?

The easiest way to check that an application runs on the discrete Nvidia graphics card is the nvidia-smi utility.

For games on the Source engine, you can also check through the in-game console using the mat_info command.

References

https://wiki.archlinux.org/title/Steam/Game-specific_troubleshooting
https://help.steampowered.com/en/faqs/view/145A-FE54-F37B-278A
https://bbs.archlinux.org/viewtopic.php?id=277708

Sleep Sort

Sleep Sort – sleep sorting, another representative of deterministic strange sorting algorithms.

It works like this:

  1. Loop through the list of elements
  2. Start a separate thread for each element
  3. Each thread sleeps for a time proportional to the element's value and then prints the value
  4. After the loop, wait for the thread with the longest sleep to finish; by that time the values have been printed in sorted order

Example code for sleep sort algorithm in C:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>

typedef struct {
    int number;
} ThreadPayload;

void *sortNumber(void *args) {
    ThreadPayload *payload = (ThreadPayload*) args;
    const int number = payload->number;
    free(payload);
    usleep(number * 1000);
    printf("%d ", number);
    return NULL;
}

int main(int argc, char *argv[]) {
    const int numbers[] = {2, 42, 1, 87, 7, 9, 5, 35};
    const int length = sizeof(numbers) / sizeof(int);

    int maximal = 0;
    pthread_t maximalThreadID;

    printf("Sorting: ");
    for (int i = 0; i < length; i++) {
        pthread_t threadID;
        int number = numbers[i];
        printf("%d ", number);

        ThreadPayload *payload = malloc(sizeof(ThreadPayload));
        payload->number = number;
        pthread_create(&threadID, NULL, sortNumber, (void *) payload);

        // remember the thread with the largest value: it sleeps the longest
        if (maximal < number) {
            maximal = number;
            maximalThreadID = threadID;
        }
    }
    printf("\n");
    printf("Sorted: ");
    // waiting only for the longest sleep is enough: all shorter sleeps finish earlier
    pthread_join(maximalThreadID, NULL);
    printf("\n");
    return 0;
}

In this implementation I used the usleep function, which takes microseconds, and multiplied the value by 1000, so the sleep time is effectively in milliseconds; the total run time is therefore roughly the maximum element in milliseconds plus thread-scheduling overhead.
The time complexity of the algorithm is O(very long)

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/sleepSort

Sources

https://codoholicconfessions.wordpress.com/2017/05/21/strangest-sorting-algorithms/
https://twitter.com/javascriptdaily/status/856267407106682880?lang=en
https://stackoverflow.com/questions/6474318/what-is-the-time-complexity-of-the-sleep-sort

Stalin Sort

Stalin Sort is sorting by purging, one of the sorting algorithms with data loss.
The algorithm is very productive and efficient; its time complexity is O(n).

It works like this:

  1. We loop through the array, comparing the current element with the next one
  2. If the next element is less than the current one, we delete the next element
  3. As a result, we get a sorted array in O(n)

Example of the algorithm output:

Numbers: [1, 3, 2, 4, 6, 42, 4, 8, 5, 0, 35, 10]
Gulag: []
Element 2 sent to Gulag
Element 4 sent to Gulag
Element 8 sent to Gulag
Element 5 sent to Gulag
Element 0 sent to Gulag
Element 35 sent to Gulag
Element 10 sent to Gulag
Numbers: [1, 3, 4, 6, 42]
Gulag: [2, 4, 8, 5, 0, 35, 10]

Python 3 code:

numbers = [1, 3, 2, 4, 6, 42, 4, 8, 5, 0, 35, 10]
gulag = []

print(f"Numbers: {numbers}")
print(f"Gulag: {gulag}")

i = 0
maximal = numbers[0]

while i < len(numbers):
    element = numbers[i]
    if maximal > element:
        print(f"Element {element} sent to Gulag")
        gulag.append(element)
        del numbers[i]
    else:
        maximal = element        
        i += 1

print(f"Numbers: {numbers}")
print(f"Gulag: {gulag}")

Among the disadvantages one can note the loss of data, but if we are moving towards a utopian, ideal list sorted in O(n), how else could it be?

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/stalinSort

Sources

https://github.com/gustavo-depaula/stalin-sort
https://www.youtube.com/shorts/juRL-Xn-E00
https://www.youtube.com/watch?v=L78i2YcyYfk

Selection Sort

Selection Sort is the sorting-by-selection algorithm. Selection of what? Of the minimum number!!!
The time complexity of the algorithm is O(n²).

The algorithm works as follows:

  1. We go through the array in a loop from left to right, remembering the current starting index and the number at that index; let's call that number A
  2. Inside the loop we start another one, also going left to right, looking for something smaller than A
  3. When we find a smaller number, we remember its index; now that smaller number becomes A
  4. When the inner loop ends, we swap the number at the starting index with the number A
  5. After the outer loop finishes its full pass, we get a sorted array

Example of algorithm execution:

(29, 49, 66, 35, 7, 12, 80)
29 > 7
(7, 49, 66, 35, 29, 12, 80)
Round 1 ENDED
Round 2
(7, 49, 66, 35, 29, 12, 80)
49 > 35
35 > 29
29 > 12
(7, 12, 66, 35, 29, 49, 80)
Round 2 ENDED
Round 3
(7, 12, 66, 35, 29, 49, 80)
66 > 35
35 > 29
(7, 12, 29, 35, 66, 49, 80)
Round 3 ENDED
Round 4
(7, 12, 29, 35, 66, 49, 80)
(7, 12, 29, 35, 66, 49, 80)
Round 4 ENDED
Round 5
(7, 12, 29, 35, 66, 49, 80)
66 > 49
(7, 12, 29, 35, 49, 66, 80)
Round 5 ENDED
Round 6
(7, 12, 29, 35, 49, 66, 80)
(7, 12, 29, 35, 49, 66, 80)
Round 6 ENDED
Sorted: (7, 12, 29, 35, 49, 66, 80)

Having failed to find an Objective-C implementation on Rosetta Code, I wrote it myself:

#import <Foundation/Foundation.h>

@interface SelectionSort : NSObject
- (void)performSort:(NSMutableArray *)numbers;
@end

@implementation SelectionSort
- (void)performSort:(NSMutableArray *)numbers
{
   NSLog(@"%@", numbers);   
   for (int startIndex = 0; startIndex < numbers.count-1; startIndex++) {
      int minimalNumberIndex = startIndex;
      for (int i = startIndex + 1; i < numbers.count; i++) {
         id lhs = [numbers objectAtIndex: minimalNumberIndex];
         id rhs = [numbers objectAtIndex: i];
         if ([lhs isGreaterThan: rhs]) {
            minimalNumberIndex = i;
         }
      }
      id temporary = [numbers objectAtIndex: minimalNumberIndex];
      [numbers setObject: [numbers objectAtIndex: startIndex] 
               atIndexedSubscript: minimalNumberIndex];
      [numbers setObject: temporary
               atIndexedSubscript: startIndex];
   }
   NSLog(@"%@", numbers);
}

@end 

You can build and run it either on macOS with Xcode, or on any operating system that supports GNUstep; for example, it builds for me with Clang on Arch Linux.
Build script:

clang \
        main.m \
        -lobjc \
        `gnustep-config --objc-flags` \
        `gnustep-config --objc-libs` \
        -I /usr/include/GNUstepBase \
        -I /usr/lib/gcc/x86_64-pc-linux-gnu/12.1.0/include/ \
        -lgnustep-base \
        -o SelectionSort
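
Assuming the script above is saved as build.sh (a file name of my choosing) next to main.m, building and running looks like this:

sh build.sh
./SelectionSort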

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/selectionSort

Sources

https://rosettacode.org/wiki/Sorting_algorithms/Selection_sort
https://ru.wikipedia.org/wiki/Сортировка_выбором
https://en.wikipedia.org/wiki/Selection_sort
https://www.youtube.com/watch?v=LJ7GYbX7qpM

Counting Sort

Counting Sort is the algorithm of sorting by counting. What do you mean, by counting? Yes! Just like that!

The algorithm involves at least two arrays: the first is the list of integers to be sorted; the second is a count array of size (maximum number - minimum number) + 1, initially filled with zeros. We then walk over the first array and, for each number, compute its index in the count array (the number minus the minimum) and increment the counter at that index by one. After going through the entire list, we get a count array holding the number of repetitions of each value from the first one. The algorithm has a serious drawback: the count array also stores zeros for values that never occur in the input list, the so-called memory overhead.

After building the count array, we iterate over it and write the numbers back by index in sorted order, decrementing each counter down to zero. Counters that are already zero are skipped.

An example of unoptimized operation of the counting sort algorithm:

  1. Input array: 1,9,1,4,6,4,4
  2. Then the count array covers the values 0,1,2,3,4,5,6,7,8,9 (in this unoptimized variant the minimum is taken as 0 and the maximum is 9)
  3. The final counters are 0,2,0,0,3,0,1,0,0,1
  4. The resulting sorted array is 1,1,4,4,4,6,9

Algorithm code in Python 3:


numbers = [42, 89, 69, 777, 22, 35, 42, 69, 42, 90, 777]

minimal = min(numbers)
maximal = max(numbers)
countListRange = maximal - minimal
countListRange += 1
countList = [0] * countListRange

print(numbers)
print(f"Minimal number: {minimal}")
print(f"Maximal number: {maximal}")
print(f"Count list size: {countListRange}")

for number in numbers:
    index = number - minimal
    countList[index] += 1

replacingIndex = 0
for index, count in enumerate(countList):
    for i in range(count):
        outputNumber = minimal + index
        numbers[replacingIndex] = outputNumber
        replacingIndex += 1

print(numbers)

Because two arrays are used, the time complexity of the algorithm is O(n + k), where k is the size of the count array (the range of values).

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/countingSort

Sources

https://www.youtube.com/watch?v=6dk_csyWif0
https://www.youtube.com/watch?v=OKd534EWcdk
https://en.wikipedia.org/wiki/Counting_sort
https://rosettacode.org/wiki/Sorting_algorithms/Counting_sort
https://pro-prof.com/forums/topic/%D0%B0%D0%BB%D0%B3%D0%BE%D1%80%D0%B8%D1%82%D0%BC-%D1%81%D0%BE%D1%80%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%BA%D0%B8-%D0%BF%D0%BE%D0%B4%D1%81%D1%87%D0%B5%D1%82%D0%BE%D0%BC

Bogosort

Bogosort, also known as pseudo-sort or swamp sort, is one of the most useless sorting algorithms.

It works like this:

  1. An array of numbers is fed to the input
  2. The array of numbers is shuffled randomly
  3. We check whether the array is sorted
  4. If it is not sorted, the array is shuffled again
  5. The whole process is repeated until the array happens to come out sorted

As you can see, the performance of this algorithm is terrible: smart people estimate it at O(n × n!), since each shuffle has roughly a 1/n! chance of landing on the sorted order and every attempt costs O(n) to shuffle and check. In other words, you can get stuck throwing dice for the glory of the god of chaos for many years and the array will still not be sorted. Or maybe it will?

Implementation

To implement it in TypeScript, I needed to write the following functions:
1. Shuffling an array of objects
2. Comparing two arrays
3. Generating a random number in the range from zero to a given number (sic!)
4. Printing progress, because it feels like the sorting goes on forever

Below is the implementation code in TypeScript:

// random integer in the range [0, maximal)
const randomInteger = (maximal: number) => Math.floor(Math.random() * maximal);
// element-by-element comparison of two arrays of equal length
const isEqual = (lhs: any[], rhs: any[]) => lhs.every((val, index) => val === rhs[index]);
// progress report, so the endless shuffling is at least visible
const printoutProcess = (original: number[], current: number[], run: number) =>
    console.log(`Still trying to sort: ${original}, current shuffle ${current}, try number: ${run}`);
// naive shuffle: swap every position with a random one
const shuffle = (array: any[]) => {
    for (var i = 0; i < array.length; i++) {
        var destination = randomInteger(array.length);
        var temp = array[i];
        array[i] = array[destination];
        array[destination] = temp;
    }
}

let numbers: number[] = Array.from({length: 10}, () => randomInteger(10));
const originalNumbers = [...numbers];
const sortedNumbers = [...numbers].sort((lhs, rhs) => lhs - rhs);

let numberOfRuns = 1;

do {
    if (numberOfRuns % 1000 == 0) {
        printoutProcess(originalNumbers, numbers, numberOfRuns);
    }
    shuffle(numbers);
    numberOfRuns++;
} while (isEqual(numbers, sortedNumbers) == false)

console.log(`Success!`);
console.log(`Run number: ${numberOfRuns}`)
console.log(`Original numbers: ${originalNumbers}`);
console.log(`Current numbers: ${numbers}`);
console.log(`Sorted numbers: ${sortedNumbers}`);

For debugging you can use VSCode and the TypeScript Debugger plugin by kakumei.

How long

Output of the algorithm:

src/bogosort.ts:1
Still trying to sort: 5,4,8,7,5,0,2,9,7,2, current shuffle 8,7,0,2,4,7,2,5,9,5, try number: 145000
src/bogosort.ts:2
Still trying to sort: 5,4,8,7,5,0,2,9,7,2, current shuffle 7,5,2,4,9,8,0,5,2,7, try number: 146000
src/bogosort.ts:2
Still trying to sort: 5,4,8,7,5,0,2,9,7,2, current shuffle 0,2,7,4,9,5,7,5,8,2, try number: 147000
src/bogosort.ts:2
Still trying to sort: 5,4,8,7,5,0,2,9,7,2, current shuffle 5,9,7,8,5,4,2,7,0,2, try number: 148000
src/bogosort.ts:2
Success!
src/bogosort.ts:24
Run number: 148798
src/bogosort.ts:25
Original numbers: 5,4,8,7,5,0,2,9,7,2
src/bogosort.ts:26
Current numbers: 0,2,2,4,5,5,7,7,8,9
src/bogosort.ts:27
Sorted numbers: 0,2,2,4,5,5,7,7,8,9

For an array of 10 numbers, Bogosort shuffled the original array 148,798 times; a bit much, isn't it?
The algorithm can be used for learning, to get a feel for the capabilities of the language you are going to work with on the market. Personally, I was surprised to learn that vanilla JS and TS still have no built-in array shuffle, no generator for a random integer in a range, and no access to object hashes for fast comparison.

Links

https://gitlab.com/demensdeum/algorithms/-/tree/master/sortAlgorithms/bogosort
https://www.typescriptlang.org/
https://marketplace.visualstudio.com/items?itemName=kakumei.ts-debug

Sources

https://www.youtube.com/watch?v=r2N3scbd_jg
https://en.wikipedia.org/wiki/Bogosort