Antigravity

In a couple of days, with the help of Antigravity, I migrated the Masonry-AR backend from PHP + MySQL to Node.js + MongoDB + Redis, all packaged in Docker. The capabilities of AI are truly amazing. I remember how in 2022 I wrote the simplest shaders on shadertoy.com via ChatGPT, and it seemed that this toy couldn't manage anything more advanced.
https://www.shadertoy.com/view/cs2SWm

Four years later, I watch myself effortlessly migrate my project from one backend platform to another in about 10 prompts, adding containerization along the way.
https://mediumdemens.vps.webdock.cloud/masonry-ar/

Cool, really cool.

DemensDeum Coding Challenge #2

I’m starting DemensDeum Coding Challenge #2:
1. Vibe-code a web application that displays a list of parties/events in the user’s area.
2. The data source can be web scraping from the front, or a local/remote database.
3. Show events/parties on the map only for today.
4. You can change the search radius.
5. Submit your entry as a sequence of text prompts that can be reproduced in free code generators, such as Google AI Studio.
6. Should work on the web for iOS, Android, PC
7. Best design wins
8. Display detailed information about the event by tapping on the event on the map.
9. Zoom maps with your fingers or mouse.
10. The winner is chosen by the jury (write to me to participate in the jury)
11. Prize 200 USDT
12. Due date: July 1.

Winner of the past DemensDeum Coding Challenge #1
https://demensdeum.com/blog/ru/2025/06/03/demensdeum-code-challenge-1-winner/

Masonry-AR Update

The ability to buy coins with cryptocurrency has been added to the Masonry-AR game! For $1 you can get 5000 MOS. Referral links have also been added: for every friend’s purchase, the referrer receives 50,000 MOS. Details are in the Masonic Wiki. A self-walking mode has also been added: when there is no access to the GPS module, the Mason automatically begins walking forward from one of the world’s capitals.

Game link:
https://demensdeum.com/demos/masonry-ar/client/

Donkey Adept

“Donkey Adept” is a stunning, electrifying piece of pixelated surrealism. In the center is a figure in a black leather jacket, whose head is a flaming, static-ridden television with fiery donkey ears. The subject holds a powerful lantern, acting as a lone sentinel who seeks the truth amidst the noise. It’s a furious retro-style meditation on media, madness and the relentless search for light.

https://opensea.io/item/ethereum/0x008d50b3b9af49154d6387ac748855a3c62bf40d/5

Entropy in programming


Entropy in programming is a powerful but often inconspicuous force that determines the variability and unpredictability of software behavior. From simple bugs to complex deadlocks, entropy is the reason our programs do not always behave as we expect.

What is entropy in software?

Entropy in software is a measure of the unexpected outcomes of algorithms. The user perceives these outcomes as errors or bugs, but from the machine's point of view, the algorithm performs exactly the instructions the programmer put into it. Unexpected behavior arises from the huge number of possible combinations of input data, system states and interactions.

Causes of entropy:

* Mutable state: when an object can change its internal data, the result of its work becomes dependent on the entire history of its use.

* Algorithmic complexity: as a program grows, the number of possible code execution paths grows exponentially, making it almost impossible to predict all outcomes.

* External factors: the operating system, other programs and network delays can all affect the execution of your code, creating additional sources of variability.


Global variables as a source of entropy

In their paper “Global Variable Considered Harmful” (1973), W.A. Wulf and M. Shaw showed that global variables are one of the main sources of unpredictable behavior. They create implicit dependencies and side effects that are difficult to track and control, which is a classic manifestation of entropy.
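A minimal Python sketch of the effect Wulf and Shaw describe (the names and the discount rule are invented for illustration): the function's result depends on hidden history, not just on its arguments.

```python
# A global, mutable variable shared by unrelated functions.
discount = 0.0

def price_with_discount(price):
    # The result depends on whoever touched `discount` last --
    # an implicit dependency invisible at the call site.
    return price * (1 - discount)

def apply_spring_sale():
    global discount
    discount = 0.2  # side effect on global state

print(price_with_discount(100))  # 100.0
apply_spring_sale()
print(price_with_discount(100))  # 80.0 -- same call, different result
```

The second call returns a different value for identical arguments, which is exactly the kind of unpredictability the paper warns about.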

Lehman's Laws and Entropy

The idea of the growing complexity of software systems was perfectly formulated by Manny Lehman in his laws of software evolution. Two of them directly reflect the concept of entropy:

A program that is used will be modified. This statement means that software is not static. It lives, develops and changes to meet new requirements and a changing environment. Each new stage in a program's life is a potential source of entropy.

When a program is modified, its complexity increases, unless someone actively works against this. This law is a direct consequence of entropy. Without deliberate complexity-management efforts, each new modification introduces additional variability and unpredictability into the system. New dependencies, conditions and side effects appear, increasing the likelihood of bugs and non-obvious behavior.

Entropy in the world of AI and LLM: unpredictable code

In the field of artificial intelligence and large language models (LLMs), entropy is especially acute, since here we are dealing with non-deterministic algorithms. Unlike traditional programs, where the same input always produces the same output, an LLM can give different answers to the same request.

This creates a huge problem: the correctness of the algorithm can be confirmed only on a certain limited set of input data using autotests. But when working with unknown input data (requests from users), the behavior of the model becomes unpredictable.

Examples of entropy in LLM

Obscene vocabulary and racist statements: there are well-known cases when chatbots, such as Tay from Microsoft or Grok from xAI, began to generate offensive or racist statements after training on data from the Internet. This was the result of entropy: unknown input data combined with a huge training sample led to unpredictable and incorrect behavior.

Infringing content: such problems arise when a neural network begins to produce content that violates copyright or ethical norms.

AI bots in games: introducing AI characters capable of learning, for example in Fortnite, led to situations where AI bots had to be switched off and placed under activity monitoring to prevent unlawful actions by the LLM-driven bot.

Technical debt: accumulated interest on defects

Poorly written code and workarounds
Technical debt is a conscious or unconscious compromise in which priority is given to rapid delivery at the expense of long-term maintainability and quality. Quick fixes and undocumented workarounds, often implemented under time pressure, accumulate and form a “minefield”. This makes the code base extremely sensitive even to minor changes, since it becomes difficult to distinguish intentional workarounds from genuinely erroneous logic, which leads to unexpected regressions and a growing number of errors.

This demonstrates the direct, cumulative effect of technical debt on the spread of errors and the integrity of algorithms, where each shortcut taken today leads to more complex and more frequent errors in the future.

Inadequate testing and its cumulative effect

When software systems are not tested thoroughly, they are far more susceptible to errors and unexpected behavior. This inadequacy allows errors to accumulate over time, creating a system that is hard to maintain and highly prone to further errors. Neglecting testing from the very beginning not only increases technical debt but also directly drives up the number of errors. The “broken windows theory” of software entropy suggests that minor, ignored errors or design problems can accumulate over time, lead to more serious problems and reduce software quality.

This establishes a direct causal chain: lack of testing leads to an accumulation of errors, which increases entropy, which in turn leads to more complex and frequent errors, directly affecting the correctness and reliability of algorithms.

Lack of documentation and information silos

Proper documentation is often neglected during software development, which leads to fragmentation or loss of knowledge about how the system works and how to maintain it. This forces developers to reverse-engineer the system in order to make changes, significantly increasing the likelihood of misunderstandings and incorrect modifications, which directly leads to errors. It also seriously complicates the onboarding of new developers, since critical information is unavailable or misleading.

Software entropy arises from a “lack of knowledge” and from “discrepancies between general assumptions and the actual behavior of the existing system.” This is a deeper organizational observation: entropy manifests itself not only at the level of code but also at the level of knowledge. Informal, implicit knowledge is fragile and easily lost (for example, when team members leave), which directly leads to errors during modification attempts, especially by new team members, thereby jeopardizing the integrity of the algorithmic logic, since its underlying assumptions cease to be clear.

Inconsistent development practices and loss of ownership

The human factor is a significant and often underestimated driver of software entropy. Differing skills, coding styles and quality expectations among developers lead to inconsistencies and deviations in the source code. The lack of standardized processes for linting, code review, testing and documentation exacerbates this problem. In addition, unclear or unstable code ownership, where several teams own a part of the code or no one owns it, leads to neglect and growing decay, resulting in duplicated components that perform the same function in different ways and spread errors.

This shows that entropy is not only a technical problem but a sociotechnical one, deeply rooted in organizational dynamics and human behavior. The “collective inconsistency” arising from inconsistent practices and fragmented ownership directly produces inconsistencies and defects, making the system unpredictable and hard to control, which greatly affects the integrity of the algorithms.

Cascading malfunctions in interconnected systems

Modern software systems are often complex and highly interconnected. In such systems, a high degree of complexity and tightly coupled components increase the likelihood of cascading failures, where the failure of one component triggers a chain reaction of failures in others. This phenomenon amplifies the impact of errors and incorrect algorithm behavior, turning localized problems into systemic risks. The results of algorithms in such systems become highly vulnerable to failures that arise far from their direct execution path, leading to widespread incorrect results.

Architectural complexity, a direct manifestation of entropy, can turn isolated algorithmic errors into large-scale system failures, making the overall system unreliable and its output data untrustworthy. This emphasizes the need for architectural stability to contain the spread of entropy effects.

One of the most recent examples is the well-known shutdown of airports in America and Europe caused by blue screens of death after an antivirus software update in 2024: an erroneous outcome of the antivirus algorithm and the operating system disrupted air traffic worldwide.

Practical examples

Example 1: Entropy in Unicode and byte restriction

Let’s look at a simple example: a text field limited to 32 bytes.

Scenario with ASCII (low entropy)

If the field accepts only ASCII characters, each character takes 1 byte. Thus, exactly 32 characters fit in the field. Any extra character simply will not be accepted.

@startuml
title Example with ASCII (low entropy)
actor User
participant "TextField"

User -> TextField: enters 32 ASCII characters
TextField -> TextField: checks the length (32 bytes)
note right
Everything is fine.
end note
TextField -> User: accepts input
@enduml

Scenario with UTF-8 (high entropy):

Now imagine our program from the 1980s lands in 2025. When the field accepts UTF-8, each character can occupy from 1 to 4 bytes. If the user enters a string exceeding 32 bytes, the system may cut it incorrectly. For example, an emoji occupies 4 bytes. If the truncation happens inside a character, we get a “broken” character.

@startuml
title Example with UTF-8 (high entropy)
actor User
participant "TextField"

User -> TextField: enters a 37-byte string with emoji
TextField -> TextField: truncates the string to 32 bytes
note right
Surprise! A character
is cut mid-byte-sequence.
end note
TextField -> User: displays a mangled string
note left
Invalid character.
end note
@enduml

Here entropy manifests itself in the fact that the same truncation operation produces unpredictable and incorrect results for different input data.
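A broken character can be avoided by truncating only at character boundaries. A minimal Python sketch (the function name and byte budget are illustrative):

```python
def truncate_utf8(text: str, max_bytes: int) -> str:
    """Truncate text so its UTF-8 encoding fits in max_bytes
    without cutting through a multi-byte character."""
    data = text.encode("utf-8")
    if len(data) <= max_bytes:
        return text
    # errors="ignore" silently drops the partial character at the cut point
    return data[:max_bytes].decode("utf-8", errors="ignore")

s = "Hi " + "👋" * 8              # 3 + 8*4 = 35 bytes in UTF-8
safe = truncate_utf8(s, 32)       # 7 emoji survive; the 8th would be split
print(len(safe.encode("utf-8")))  # 31 -- valid string, no broken characters
```

The naive slice `data[:32]` would leave one stray byte of the eighth emoji; decoding with `errors="ignore"` discards it instead of producing a replacement character.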

Example 2: Entropy in CSS and browser incompatibilities

Even in seemingly stable technologies, like CSS, entropy can occur due to different interpretations of standards.

Imagine that a developer has applied user-select: none; to all elements to disable text selection.

Browser 10 (old logic)

Browser 10 makes an exception for input fields. Thus, despite the flag, the user can still enter data.

@startuml
title Browser 10
actor User
participant "Browser 10" as Browser10

User -> Browser10: types into an input
Browser10 -> Browser10: checks CSS
note right
user-select: none;
is ignored for input fields
end note
Browser10 -> User: allows the input
@enduml

Browser 11 (New Logic)

The developers of the new browser decided to follow the specification strictly, applying the rule to all elements without exception.

@startuml
title Browser 11
actor User
participant "Browser 11" as Browser11

User -> Browser11: types into an input
Browser11 -> Browser11: checks CSS
note right
user-select: none;
is applied to all elements, including inputs
end note
Browser11 -> User: blocks the input
note left
The user cannot type
anything.
end note
@enduml

This is a classic example of entropy: the same rule leads to different results depending on the “system” (the browser version).

Example 3: Entropy due to an ambiguous spec

An ambiguous technical specification is another powerful source of entropy. When two developers, Bob and Alice, understand the same requirement in different ways, this leads to incompatible implementations.

Spec: “Implement a Fibonacci number generator. For optimization, the list of generated numbers must be cached inside the generator.”

Bob’s mental model (OOP with mutable state)
Bob focused on the phrase “the list … must be cached.” He implemented a class that stores shared state (self.sequence) and grows it with every call.

class FibonacciGenerator:
    def __init__(self):
        self.sequence = [0, 1]

    def generate(self, n):
        if n <= len(self.sequence):
            return self.sequence

        while len(self.sequence) < n:
            next_num = self.sequence[-1] + self.sequence[-2]
            self.sequence.append(next_num)

        return self.sequence

Alice's mental model (functional approach)

Alice focused on the phrase "returns the sequence." She wrote a pure function that returns a new list each time, using the list only as a local working variable.

def generate(n):
    sequence = [0, 1]
    if n <= 2:
        return sequence[:n]

    while len(sequence) < n:
        next_num = sequence[-1] + sequence[-2]
        sequence.append(next_num)

    return sequence

When Alice starts using Bob's generator, she expects generate(5) to always return 5 numbers. But if Bob has previously called generate(8) on the same object, Alice will receive 8 numbers.

Bottom line: entropy here is a consequence of mismatched mental models. The mutable state in Bob's implementation makes the system unpredictable for Alice, who expects the behavior of a pure function.
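The mismatch can be reproduced in a few lines. This sketch assumes a class name (FibonacciGenerator) for Bob's generator, since the spec above does not fix one:

```python
# Bob's stateful generator, as described in the spec discussion above.
class FibonacciGenerator:
    def __init__(self):
        self.sequence = [0, 1]

    def generate(self, n):
        if n <= len(self.sequence):
            return self.sequence  # returns the WHOLE cached list, not n items
        while len(self.sequence) < n:
            self.sequence.append(self.sequence[-1] + self.sequence[-2])
        return self.sequence

gen = FibonacciGenerator()
print(len(gen.generate(8)))  # 8 -- Bob's earlier call grows the cache
print(len(gen.generate(5)))  # 8, not 5 -- Alice's expectation is violated
```

The second call's result depends on the object's history, which is exactly the entropy the example describes.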

Entropy and multithreading: race conditions and deadlocks

In multithreaded programming, entropy manifests itself especially strongly. Several threads execute simultaneously, and the order of their execution is unpredictable. This can lead to a race condition, when the result depends on which thread is first to access a shared resource. The extreme case is a deadlock, when two or more threads wait for each other and the program freezes.

An example of resolving a deadlock:

A deadlock arises when two or more threads block each other while waiting for a resource to be released. The solution is to establish a single, fixed order of acquiring resources, for example, locking them in increasing ID order. This eliminates the cyclic waiting that causes the deadlock.

@startuml
title Solution: a unified locking order
participant "Thread 1" as Thread1
participant "Thread 2" as Thread2
participant "Account A" as AccountA
participant "Account B" as AccountB

Thread1 -> AccountA: locks account A
note over Thread1
Follows the rule:
lock in ID order
end note
Thread2 -> AccountA: waits for account A to be freed
note over Thread2
Follows the rule:
waits for the lock on A
end note
Thread1 -> AccountB: locks account B
Thread1 -> AccountA: releases account A
Thread1 -> AccountB: releases account B
note over Thread1
Transaction completed
end note
Thread2 -> AccountA: locks account A
Thread2 -> AccountB: locks account B
note over Thread2
Transaction completes
end note
@enduml

This approach, ordered locking (lock ordering), is a fundamental strategy for preventing deadlocks in parallel programming.
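A minimal Python sketch of lock ordering (the account structure and amounts are invented for illustration):

```python
import threading

account_a = {"id": 1, "lock": threading.Lock(), "balance": 100}
account_b = {"id": 2, "lock": threading.Lock(), "balance": 100}

def transfer(src, dst, amount):
    # Lock ordering: always acquire locks in increasing account-ID order,
    # regardless of transfer direction, so no cycle of waits can form.
    first, second = sorted((src, dst), key=lambda acc: acc["id"])
    with first["lock"]:
        with second["lock"]:
            src["balance"] -= amount
            dst["balance"] += amount

# Two opposite transfers run concurrently without deadlocking.
t1 = threading.Thread(target=transfer, args=(account_a, account_b, 10))
t2 = threading.Thread(target=transfer, args=(account_b, account_a, 10))
t1.start(); t2.start(); t1.join(); t2.join()
print(account_a["balance"], account_b["balance"])  # 100 100
```

If each thread instead locked its own source account first, thread 1 holding A and thread 2 holding B could wait on each other forever; sorting by ID makes that cycle impossible.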

Now let's analyze how mutable state in the OOP approach increases entropy, using the example of drawing on a canvas, and compare this with a pure function.

The problem: mutable state and entropy

When an object has mutable state, its behavior becomes unpredictable. The result of calling the same method depends not only on its arguments but also on the entire history of interaction with the object. This introduces entropy into the system.

Consider two approaches to drawing a rectangle on a canvas: one in an OOP style with mutable state, the other functional, with a pure function.

1. OOP approach: a class with mutable state
Here we create a Cursor class that stores internal state, in this case a color. The draw method draws a rectangle using this state.

class Cursor {
  constructor(initialColor) {
    // Internal state of the object, which can change
    this.color = initialColor;
  }

  // Method for changing the state
  setColor(newColor) {
    this.color = newColor;
  }

  // Method with a side effect: it uses the internal state
  draw(ctx, rect) {
    ctx.fillStyle = this.color;
    ctx.fillRect(rect.x, rect.y, rect.width, rect.height);
  }
}

// Usage
const myCursor = new Cursor('red');
const rectA = { x: 10, y: 10, width: 50, height: 50 };
const rectB = { x: 70, y: 70, width: 50, height: 50 };

myCursor.draw(ctx, rectA); // Uses the initial color: red
myCursor.setColor('blue'); // Change the cursor's state
myCursor.draw(ctx, rectB); // Uses the new state: blue

UML diagram of the OOP approach:

This diagram clearly shows that calling the draw method gives different results even though its arguments may not change. This is caused by a separate setColor call that changed the internal state of the object, a classic manifestation of entropy through mutable state.

@startuml
title OOP approach
actor "Programmer" as Programmer
participant "Cursor class" as Cursor
participant "Canvas" as Canvas

Programmer -> Cursor: creates new Cursor('red')
note left
  - Initializes the state
    with the color 'red'.
end note
Programmer -> Cursor: draw(ctx, rectA)
note right
  - The draw method uses
    the object's internal
    state (the color).
end note
Cursor -> Canvas: draws a 'red' rectangle
Programmer -> Cursor: setColor('blue')
note left
  - Changes the internal state!
  - This is a side effect.
end note
Programmer -> Cursor: draw(ctx, rectB)
note right
  - The same draw method,
    but with a different result
    due to the changed state.
end note
Cursor -> Canvas: draws a 'blue' rectangle
@enduml

2. Functional approach: Pure function

Here we use a pure function. Its job is simply to draw a rectangle using the data passed to it. It has no state, and calling it affects nothing outside its scope.

// The function receives all required data as arguments
function drawRectangle(ctx, rect, color) {
  ctx.fillStyle = color;
  ctx.fillRect(rect.x, rect.y, rect.width, rect.height);
}

// Usage
const rectA = { x: 10, y: 10, width: 50, height: 50 };
const rectB = { x: 70, y: 70, width: 50, height: 50 };

drawRectangle(ctx, rectA, 'red'); // Draw the first rectangle
drawRectangle(ctx, rectB, 'blue'); // Draw the second rectangle

UML diagram of a functional approach:

This diagram shows that the drawRectangle function always receives the color from outside. Its behavior depends entirely on the input parameters, which makes it pure and keeps its entropy level low.

@startuml
title Functional approach
actor "Programmer" as Programmer
participant "Function\ndrawRectangle" as DrawFunc
participant "Canvas" as Canvas

Programmer -> DrawFunc: drawRectangle(ctx, rectA, 'red')
note right
- Call with arguments:
- ctx
- rectA (coordinates)
- 'red' (color)
- The function has no state.
end note

DrawFunc -> Canvas: fills with the color 'red'
Programmer -> DrawFunc: drawRectangle(ctx, rectB, 'blue')
note right
- Call with new arguments:
- ctx
- rectB (coordinates)
- 'blue' (color)
end note
DrawFunc -> Canvas: fills with the color 'blue'
@enduml

In the pure-function example, behavior is completely predictable because the function has no state. All information needed for its work is passed through arguments, making it isolated and safe. In the OOP approach with mutable state, the behavior of the draw method can be affected by the entire history of interaction with the object, which introduces entropy and makes the code less reliable.

Modular design and architecture: isolation, testability and reuse

Dividing complex systems into smaller, independent, self-contained modules simplifies design, development, testing and maintenance. Each module handles specific functionality and interacts through clearly defined interfaces, reducing interdependence and promoting separation of responsibility. This approach improves readability, simplifies maintenance, facilitates parallel development and makes testing and debugging easier by isolating problems. Critically, it reduces the “blast radius” of errors, containing defects within individual modules and preventing cascading failures. Microservice architecture is a powerful realization of modularity.

Modularity is not just a way of organizing code but a fundamental approach to containing defects and increasing stability. By limiting the impact of an error in one module, modularity increases the system's overall resistance to entropic decay, guaranteeing that a single point of failure does not compromise the correctness of the entire application. It also allows teams to focus on smaller, more manageable parts of the system, which leads to more thorough testing and faster detection and correction of errors.

Clean code practices: KISS, DRY and SOLID principles for reliability

KISS (Keep It Simple, Stupid):
This design philosophy advocates simplicity and clarity, actively avoiding unnecessary complexity. Simple code is inherently easier to read, understand and modify, which directly reduces its proneness to errors and improves maintainability. Complexity is well established as a breeding ground for errors.

KISS is not merely an aesthetic preference but a deliberate design choice that reduces the attack surface for errors and makes the code more resilient to future changes, thereby preserving the correctness and predictability of algorithms. It is a proactive measure against entropy at the fine-grained level of code.

DRY (Don't Repeat Yourself):
The DRY principle aims to reduce repetition of information and duplication of code, replacing them with abstractions or with data normalization. Its core tenet is that “every piece of knowledge must have a single, unambiguous, authoritative representation within a system.” This approach eliminates redundancy, which in turn reduces inconsistencies and prevents errors from spreading, or from being fixed inconsistently across multiple copies of duplicated logic. It also simplifies maintenance and debugging of the code base.

Code duplication leads to inconsistent changes, which in turn lead to errors. DRY prevents this by providing a single source of truth for logic and data, which directly contributes to algorithmic correctness, guaranteeing that shared logic behaves uniformly and predictably throughout the system and preventing subtle, hard-to-trace errors.
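A tiny illustration of the single-source-of-truth idea (the VAT rule and the names are invented for this example):

```python
# Without DRY, the same pricing rule would be re-implemented in the
# invoice module and in the cart module; change one copy and forget the
# other, and the system starts disagreeing with itself.
# With DRY there is one authoritative definition that every caller shares.
VAT_RATE = 0.20

def price_with_vat(net: float) -> float:
    """The single, authoritative representation of the VAT rule."""
    return round(net * (1 + VAT_RATE), 2)

print(price_with_vat(50.0))   # 60.0
print(price_with_vat(19.99))  # 23.99
```

Because every module calls the same function, changing the rate in one place changes it everywhere, and the rule cannot drift out of sync.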

SOLID principles

This mnemonic acronym stands for five fundamental design principles (single responsibility, open/closed, Liskov substitution, interface segregation, dependency inversion) that are crucial for creating object-oriented designs that are understandable, flexible and maintainable. By adhering to SOLID, software entities become easier to maintain and adapt, which leads to fewer errors and faster development cycles. The principles achieve this by simplifying maintenance (SRP), enabling scalable feature addition without modification (OCP), ensuring behavioral consistency (LSP), minimizing coupling (ISP) and increasing flexibility through abstraction (DIP).

SOLID principles provide a holistic approach to structural integrity, making the system inherently more resistant to the ripple effects of changes. By promoting modularity, decoupling and clear responsibilities, they prevent cascading errors and preserve the correctness of algorithms even as the system continuously evolves, acting as fundamental measures against entropy.

Entropy and Domain-Driven Design (DDD)

Domain-Driven Design (DDD) is not just a philosophy, but a full-fledged methodology that offers specific patterns for breaking the application into domains, which allows you to effectively control complexity and fight entropy. DDD helps to turn a chaotic system into a set of predictable, isolated components.

Gang of Four design patterns as a shared conceptual vocabulary

The book "Design Patterns: Elements of Reusable Object-Oriented Software" (1994), written by a "gang of four" (GOF), offered a set of proven solutions for typical problems. These patterns are excellent tools for combating entropy, as they create structured, predictable and controlled systems.

One of the key effects of patterns is the creation of a shared conceptual vocabulary. When a developer on a team talks about a "factory" or a "singleton", colleagues immediately understand what kind of code is being discussed. This significantly reduces entropy in communication, because:

Ambiguity decreases: patterns have clear names and descriptions, which rules out differing interpretations, as in the example with Bob and Alice.

Onboarding accelerates: new team members get up to speed on the project faster, since they do not need to guess the logic behind complex structures.

Refactoring becomes easier: if a part of the system built according to a pattern needs to change, the developer already knows how it is structured and which parts can be safely modified.

Examples of GOF patterns and their influence on entropy:

Pattern "Strategy": allows you to encapsulate various algorithms in individual classes and make them interchangeable. This reduces entropy, as it allows you to change the behavior of the system without changing its main code.

Pattern "Command" (Command): Inkapsules the method of the method to the object. This allows you to postpone execution, put the commands in the queue or cancel them. Pattern reduces entropy, as it separates the sender of the team from its recipient, making them independent.

The Observer pattern: defines a one-to-many dependency in which a change in one object's state automatically notifies all objects that depend on it. This helps control side effects, making them explicit and predictable rather than chaotic and hidden.

Pattern "Factory Method": defines the interface for creating objects, but allows subclasses to decide which class to institute. This reduces entropy, as it allows you to flexibly create objects without the need to know specific classes, reducing connectedness.

These patterns help programmers create more predictable, testable and controllable systems, thereby reducing the entropy that inevitably arises in complex projects.
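The Strategy pattern can be sketched in a few lines of Python (the report/sorting domain here is invented for illustration):

```python
from typing import Callable, List

# Two interchangeable algorithms behind the same call signature.
def sort_ascending(data: List[int]) -> List[int]:
    return sorted(data)

def sort_descending(data: List[int]) -> List[int]:
    return sorted(data, reverse=True)

class Report:
    """The context: it delegates to whatever strategy it was given."""
    def __init__(self, strategy: Callable[[List[int]], List[int]]):
        self.strategy = strategy

    def render(self, data: List[int]) -> List[int]:
        return self.strategy(data)

report = Report(sort_ascending)
print(report.render([3, 1, 2]))    # [1, 2, 3]
report.strategy = sort_descending  # swap behavior without touching Report
print(report.render([3, 1, 2]))    # [3, 2, 1]
```

The Report class never changes when a new ordering is needed; only a new strategy function is added, which is the entropy-reducing property the pattern list above describes.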

Key DDD patterns for controlling entropy

Bounded Contexts: this pattern is the foundation of DDD. It proposes dividing a large system into small, autonomous parts. Each context has its own model, its own vocabulary of terms (Ubiquitous Language) and its own logic. This creates strict boundaries that prevent changes and side effects from spreading. A change in one bounded context, for example the "orders context", will not affect the "delivery context".

Aggregates: an aggregate is a cluster of related objects (for example, "order" and "order lines") that is treated as a single whole. An aggregate has one root object (the Aggregate Root), which is the only entry point for all changes. This provides consistency and guarantees that the aggregate's state always remains valid. By changing the aggregate only through its root object, we control how and when state changes occur, which significantly reduces entropy.

Domain Services: for operations that do not belong to any particular domain object (for example, transferring money between accounts), DDD proposes using domain services. They coordinate actions across several aggregates or objects but hold no state themselves. This makes the logic more transparent and predictable.

Domain Events: instead of calling methods directly across contexts, DDD proposes using events. When something important happens in one context, it "publishes" an event. Other contexts can subscribe to this event and react to it. This creates loose coupling between components, making the system more scalable and resilient to change.

DDD helps control entropy, creating clear boundaries, strict rules and isolated components. This turns a complex, confusing system into a set of independent, controlled parts, each of which has its own “law” and predictable behavior.
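The Domain Events idea can be sketched with a minimal publish/subscribe mechanism in Python (the event name and the orders/shipping contexts are illustrative, not a full DDD framework):

```python
from collections import defaultdict

# A trivial in-process event bus: event name -> list of handlers.
subscribers = defaultdict(list)

def subscribe(event_name, handler):
    subscribers[event_name].append(handler)

def publish(event_name, payload):
    for handler in subscribers[event_name]:
        handler(payload)

shipments = []

# The "shipping" context reacts to an event published by the "orders"
# context without either context calling the other directly.
subscribe("order_placed", lambda order: shipments.append(order["id"]))

publish("order_placed", {"id": 42})
print(shipments)  # [42]
```

The orders side only publishes; the shipping side only subscribes. Either can change or be replaced without touching the other, which is the loose coupling the pattern aims for.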

Comprehensive and living documentation

Maintaining detailed and up-to-date documentation on code changes, design decisions, architectural diagrams and user manuals is of paramount importance. This "living documentation" helps developers understand the intricacies of the system, track changes and correctly make future modifications or fix errors. It significantly reduces the time spent on "rediscovering" or reverse-engineering the system, which are common sources of errors.

Software entropy arises from a "lack of knowledge" and from "discrepancies between general assumptions and the actual behavior of the existing system." Documentation acts not just as a guide but as a critical mechanism for preserving knowledge, one that directly fights the "entropy of knowledge." By making implicit knowledge explicit and accessible, it reduces misunderstandings and the likelihood of errors caused by incorrect assumptions about the behavior of algorithms or system interactions, thereby protecting functional correctness.

Rigorous testing and continuous quality assurance

Automated testing: unit, integration, system and regression testing
Automated testing is an indispensable tool for mitigating software entropy and preventing errors. It allows early detection of problems, guarantees that code changes do not break existing functionality, and provides fast, consistent feedback. Key types include unit tests (for isolated components), integration tests (for interactions between modules), system tests (for the fully integrated system) and regression tests (to ensure that new changes do not reintroduce old errors). Automated testing significantly reduces human error and increases reliability.

Automated testing is the main protection against the accumulation of hidden defects. It actively "shifts" error discovery to the left in the development cycle, meaning problems are found when fixing them is cheapest and simplest, preventing their contribution to the snowball effect of entropy. This directly supports the correctness of algorithms by constantly checking expected behavior at several levels of detail.
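The unit-test and regression-test categories above can be illustrated in a few lines. This is a hedged sketch; the function, the bug report number and the scenario are invented for illustration.

```python
# Illustrative sketch: a unit test for an isolated function, plus a regression
# test that pins down a (hypothetical) previously fixed bug so it cannot
# silently return.
def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(text.split())


def test_unit_basic() -> None:
    assert normalize_whitespace("a  b") == "a b"


def test_regression_tabs_and_newlines() -> None:
    # Hypothetical bug report #123: tabs/newlines were once left untouched.
    assert normalize_whitespace("a\t\nb ") == "a b"


test_unit_basic()
test_regression_tabs_and_newlines()
print("all tests passed")
```

Run under a test runner such as pytest, each failing assertion would surface immediately on every commit, which is the "shift left" effect the text describes.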

Test-driven development (TDD): shifting error detection to the left

Test-driven development (TDD) is a software development process in which tests are written before the code itself. The iterative "red-green-refactor" cycle promotes fast feedback, enabling early error detection and significantly reducing the risk of complex problems at later stages of development. TDD has been shown to produce fewer errors and better code quality, and it aligns well with the DRY (Don't Repeat Yourself) philosophy. Empirical studies at IBM and Microsoft show that TDD can reduce pre-release defect density by an impressive 40-90%. The tests themselves also serve as living documentation.

TDD acts as proactive quality control built directly into the development process. By forcing developers to define expected behavior before implementation, it minimizes the introduction of logical errors and guarantees that code is written purposefully to meet the requirements, directly improving the correctness and predictability of algorithms from the very beginning.
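One red-green iteration of the cycle described above can be sketched concretely. The FizzBuzz example is mine, not the article's: the test is written first (it would fail against an empty stub), then the minimal implementation makes it pass.

```python
# Hedged sketch of one red-green-refactor iteration. Red: the test below is
# written before any implementation exists. Green: the simplest code that
# satisfies it follows.
def test_fizzbuzz() -> None:
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"


def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


test_fizzbuzz()
print("green")
```

The refactor step would then clean up the implementation while the test keeps it honest.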

Continuous integration and delivery (CI/CD): early feedback and stable releases
CI/CD practices are fundamental to modern software development, helping to catch errors early, accelerate development and keep the deployment process uninterrupted. Frequently integrating small code changes into a central repository enables early error detection and continuous improvement of code quality through automated builds and tests. This process provides fast feedback, letting developers fix problems quickly and effectively, and significantly increases code stability by preventing the accumulation of untested or unstable code.

CI/CD pipelines function as a continuous entropy-reduction mechanism. By automating integration and testing, they prevent the accumulation of integration problems, keep the system in a constantly deployable state and provide immediate visibility of regressions. This systematic, automated approach directly counteracts the disorder introduced by continuous change, maintaining the stability of algorithms and preventing errors from spreading through the system.

Systematic management of technical debt

Incremental refactoring: strategic code improvement

Refactoring is the process of restructuring existing code to improve its internal structure without changing its external behavior. It is a direct weapon against software rot and complexity. Although refactoring is usually seen as a way to reduce the number of errors, it is important to acknowledge that some refactorings can unintentionally introduce new errors, which demands rigorous testing. Nevertheless, studies generally confirm that refactored code is less error-prone than unrefactored code. Incremental refactoring, in which debt management is integrated into the ongoing development process rather than postponed, is crucial to preventing the exponential accumulation of technical debt.

Refactoring is a deliberate act of entropy reduction: proactively restructuring code to make it more resistant to change, thereby lowering the likelihood of future errors and improving the clarity of algorithms. It turns reactive firefighting into proactive management of structural health.
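"Without changing external behavior" is checkable: both versions of a function must agree on every input. A hedged before/after sketch (the pricing rule and names are invented for illustration):

```python
# Illustrative sketch: a behavior-preserving refactoring. The assertion at the
# end checks that both versions agree, which is the definition of refactoring.
def total_before(items):  # tangled original: magic numbers, mixed concerns
    t = 0
    for i in items:
        t += i["price"] * i["qty"]
    if t > 100:
        t = t * 0.9
    return t


DISCOUNT_THRESHOLD = 100
DISCOUNT_RATE = 0.9


def subtotal(items: list[dict]) -> float:
    return sum(i["price"] * i["qty"] for i in items)


def total_after(items: list[dict]) -> float:
    """Refactored: named constants and an extracted helper, same behavior."""
    t = subtotal(items)
    return t * DISCOUNT_RATE if t > DISCOUNT_THRESHOLD else t


cart = [{"price": 60, "qty": 2}, {"price": 5, "qty": 1}]
assert total_before(cart) == total_after(cart)
print(total_after(cart))  # 112.5
```

In practice the equivalence check is a regression-test suite run before and after the restructuring.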

Technical-debt backlogs: prioritization and resource allocation

Maintaining an up-to-date technical-debt backlog is a critical practice for systematically managing and eliminating technical debt. This backlog serves as a comprehensive register of identified debt items and areas needing improvement, guaranteeing that these problems are not overlooked. It allows project managers to prioritize debt items by severity of impact and potential risk. Integrating the backlog into the project's routine ensures that refactoring, bug fixing and code cleanup become regular parts of daily project management, reducing long-term repayment costs.

A technical-debt backlog turns an abstract, growing problem into a manageable, actionable set of tasks. This systematic approach lets organizations make reasoned trade-offs between developing new features and investing in quality, preventing the inconspicuous accumulation of debt that can lead to critical errors or degraded algorithm performance. It provides visibility and control over a key driver of entropy.
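The prioritization described above can be sketched as data plus a scoring rule. This is a hedged sketch: the severity-times-probability score and the example items are illustrative, not a prescribed methodology.

```python
# Illustrative sketch: a technical-debt backlog as plain data, ordered by a
# simple severity * probability risk score. The scoring scheme is invented.
from dataclasses import dataclass


@dataclass
class DebtItem:
    title: str
    severity: int      # 1 (cosmetic) .. 5 (critical)
    probability: int   # 1 (rarely bites) .. 5 (bites every sprint)

    @property
    def risk(self) -> int:
        return self.severity * self.probability


backlog = [
    DebtItem("Outdated ORM version", severity=4, probability=3),
    DebtItem("Dead feature flags", severity=1, probability=2),
    DebtItem("No tests around billing", severity=5, probability=4),
]

# Highest-risk items float to the top of the sprint planning discussion.
for item in sorted(backlog, key=lambda i: i.risk, reverse=True):
    print(item.risk, item.title)
```

Whatever the scoring scheme, the point is that debt becomes visible, sortable and schedulable rather than folklore.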

Static and dynamic code analysis: proactive identification of problems

Static analysis

This technique analyzes source code without executing it to find problems such as bugs, code smells, security vulnerabilities and coding-standard violations. It serves as the "first line of defense," catching problems early in the development cycle, improving overall code quality and reducing technical debt by flagging problematic patterns before they manifest as runtime errors.

Static analysis acts as an automated "code quality police." By identifying potential problems (including ones affecting algorithmic logic) before execution, it prevents them from surfacing as bugs or architectural flaws. It is a scalable way to enforce coding standards and catch the common errors that feed software entropy.
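A toy static analyzer fits in a dozen lines using Python's standard-library `ast` module: it inspects source code as a syntax tree and never runs it. The "bare except" rule below is one real example of a pattern linters flag; the analyzer itself is a sketch, not a production tool.

```python
# Hedged sketch: a tiny static analyzer built on the stdlib `ast` module.
# It flags bare `except:` clauses without executing the code under inspection.
import ast

SOURCE = """
try:
    risky()
except:
    pass
"""


def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of `except:` clauses that swallow everything."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]


print(find_bare_excepts(SOURCE))  # [4]
```

Real tools (pylint, flake8, SonarQube and the like) apply hundreds of such rules over the same kind of tree.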

Dynamic analysis

This method evaluates software behavior during execution, providing valuable insight into problems that manifest only at runtime. It excels at finding runtime errors such as memory leaks, race conditions and null-pointer exceptions, as well as performance bottlenecks and security vulnerabilities.

Dynamic analysis is critical for finding runtime behavioral defects that static analysis cannot detect. Combining static and dynamic analysis gives a comprehensive view of the code's structure and behavior, allowing teams to catch defects before they grow into serious problems.

Production monitoring and incident management

APM (Application Performance Monitoring):
APM tools are designed to monitor and optimize application performance. They help identify and diagnose complex performance problems and uncover the root causes of errors, reducing revenue lost to downtime and degradation. APM systems track metrics such as response time, resource usage and error rate, providing real-time information that lets you solve problems proactively before they affect users.

APM tools are critical for proactively solving problems and maintaining service levels. They provide deep visibility into the production environment, allowing teams to quickly find and eliminate issues that could affect the correctness of algorithms or cause errors, minimizing downtime and improving the user experience.

Observability (logs, metrics, traces):

Observability is the ability to reason about the internal states of systems from their outputs and the interactions between their components. The three main pillars of observability are metrics (quantitative data on performance and resource usage), logs (detailed chronological records of events) and traces (tracking the flow of requests through system components). Together they help identify and solve problems by providing a comprehensive understanding of system behavior. Observability goes beyond traditional monitoring, helping teams understand "unknown unknowns" and improving application uptime.

Observability lets teams flexibly investigate what is happening and quickly determine the root cause of problems they may not have foreseen. It provides a deeper, more flexible and proactive understanding of system behavior, allowing teams to quickly find and fix unforeseen problems and maintain high application availability.
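All three pillars can coexist on a single request path. A hedged sketch under my own assumptions: the field names and the JSON-line log format are illustrative, not a specific vendor's API.

```python
# Illustrative sketch of the three pillars on one request path: a structured
# log line (log), a latency measurement (metric) and a trace id propagated
# through the call (trace). All names are invented for illustration.
import json
import time
import uuid


def handle_request(trace_id: str) -> dict:
    start = time.perf_counter()
    result = {"status": 200}                      # the actual work goes here
    latency_ms = (time.perf_counter() - start) * 1000
    log_line = json.dumps({
        "event": "request_handled",
        "trace_id": trace_id,                     # correlates services
        "latency_ms": round(latency_ms, 3),       # feeds dashboards/alerts
        "status": result["status"],
    })
    print(log_line)
    return result


response = handle_request(trace_id=str(uuid.uuid4()))
```

In production, the log line would go to a collector, the latency to a metrics store, and the trace id would travel in request headers (e.g. via OpenTelemetry) rather than a function argument.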

Root cause analysis (RCA)

Root cause analysis (RCA) is a structured, data-driven process that uncovers the fundamental causes of problems in systems or processes, allowing organizations to implement effective long-term solutions rather than merely treating symptoms. It involves defining the problem, collecting and analyzing relevant data (for example, metrics, logs and timelines), identifying causal and contributing factors using tools such as the "5 Whys" and Ishikawa diagrams, and developing and implementing corrective actions. RCA is crucial for preventing the recurrence of problems and for learning from incidents.

RCA is crucial for long-term problem prevention and for learning from incidents. By systematically identifying and eliminating root causes rather than symptoms, organizations can prevent the recurrence of errors and algorithm failures, reducing the system's overall entropy and increasing its reliability.

Agile methodologies and team practices

Error management in Agile:

In an Agile environment, error management is critically important, and it is recommended to set aside time in sprints for bug fixes. Bugs should be recorded in a single product backlog and linked to the corresponding user story to facilitate root cause analysis and code improvement in subsequent sprints. Teams should strive to fix bugs as soon as possible, preferably within the current sprint, to prevent their accumulation. Collecting bug statistics (number resolved, number filed, hours spent on fixes) helps gauge code quality and improve processes.

This underscores the importance of immediate fixes, root cause analysis and continuous improvement. Agile methodologies provide a framework for proactive error management, preventing errors from feeding the system's entropy and maintaining algorithm correctness through constant verification and adaptation.

DevOps practices

DevOps practices help reduce software defects and improve quality through several key approaches: fostering a culture of collaboration and clear communication, adopting continuous integration and delivery (CI/CD), setting up automated testing, focusing on observability and metrics, avoiding manual work, building security into the early stages of the development cycle, and learning from incidents. These practices reduce errors, improve quality and drive continuous improvement.

DevOps contributes to continuous improvement and entropy reduction through automation, fast feedback loops and a culture of shared responsibility. By integrating development and operations, DevOps creates an environment in which problems are detected and eliminated quickly, preventing their accumulation and the degradation of systems, which directly supports the integrity of algorithms.

Conclusion

Software entropy is an inexorable force that constantly pushes software systems toward degradation, especially with respect to the correctness of algorithms and the incidence of errors. It is not mere physical aging but a dynamic interplay between the code, its environment and human factors that continually introduces disorder. The main drivers of this decay are growing complexity, the accumulation of technical debt, inadequate documentation, constantly changing external environments and inconsistent development practices. These factors lead directly to incorrect algorithm output, loss of predictability and a growing number of errors that can cascade through interconnected systems.

Combating software entropy requires a multifaceted, continuous and proactive approach. It is not enough simply to fix errors as they occur; the underlying causes that generate them must be systematically eliminated. Adopting the principles of modular design, clean code (KISS, DRY, SOLID) and comprehensive documentation is fundamental to building stable systems that are inherently less susceptible to entropy. Rigorous automated testing, test-driven development (TDD) and continuous integration/delivery (CI/CD) act as critical mechanisms for the early detection and prevention of defects, constantly verifying and stabilizing the codebase.

In addition, systematic management of technical debt through incremental refactoring and technical-debt backlogs, together with static and dynamic code analysis tools, lets organizations actively identify and eliminate problem areas before they lead to critical failures. Finally, reliable production monitoring with APM tools and observability platforms, combined with disciplined root cause analysis and agile team practices, ensures rapid response to emerging problems and creates a continuous improvement cycle.

Ultimately, ensuring the integrity of algorithms and minimizing errors in the face of software entropy is not a one-time effort but an ongoing commitment to maintaining order in a dynamic, constantly changing environment. By applying these strategies, organizations can significantly increase the reliability, predictability and longevity of their software systems, guaranteeing that algorithms function as intended even as the systems evolve.

LLM Fine-Tune

Currently, all popular LLM service providers implement fine-tuning via JSONL files that describe the model's inputs and outputs, with small variations: for Gemini and OpenAI, for example, the format differs slightly.

After uploading a specially formatted JSONL file, the process of specializing the LLM on the given dataset begins; all the major LLM providers charge for this service.
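As a sketch of what such a file looks like, here is a dataset written in the OpenAI-style chat JSONL format (one self-contained JSON object per line). The example conversation is invented; Gemini and other providers use similar but not identical schemas, so check each provider's current documentation before uploading.

```python
# Hedged sketch: writing a fine-tuning dataset in OpenAI-style chat JSONL.
# One JSON object per line; each line must parse independently. The example
# content is invented for illustration.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You answer in the DemensDeum blog style."},
        {"role": "user", "content": "What is software entropy?"},
        {"role": "assistant", "content": "The gradual decay of a system under change."},
    ]},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")

# Sanity check: every line parses on its own, which is the JSONL contract.
with open("train.jsonl", encoding="utf-8") as f:
    lines = [json.loads(line) for line in f]
print(len(lines))  # 1
```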

For fine-tuning on a local machine with Ollama, I recommend the detailed video from the Tech With Tim YouTube channel, "Easiest Way to Fine-Tune a LLM and Use It With Ollama":
https://www.youtube.com/watch?v=pTaSDVz0gok

An example Jupyter notebook that prepares a JSONL dataset from an export of all Telegram messages and launches the local fine-tuning process is available here:
https://github.com/demensdeum/llm-train-example

React Native brief review

React Native has established itself as a powerful tool for cross-platform development of mobile and web applications. It lets you build native apps for Android and iOS, as well as web applications, from a single JavaScript/TypeScript codebase.

Fundamentals of architecture and development

React Native's architecture is based on native bindings from JavaScript/TypeScript. The application's core business logic and UI are written in JavaScript or TypeScript. When access to platform-specific native functionality is required (for example, the device's camera or GPS), native bindings are used, which allow calling code written in Swift/Objective-C for iOS or Java/Kotlin for Android.

It is important to note that the resulting platforms may differ in functionality. For example, a given feature may be available only on Android and iOS but not on the web, or vice versa, depending on the platform's native capabilities.

Configuration and updates
Native bindings are configured via the plugins key. For stable and safe development, it is critical to use the latest versions of React Native components and always consult the current documentation. This helps avoid compatibility problems and take full advantage of the latest updates.

Features of development and optimization

React Native can generate the resulting platform-specific projects (for example, android and ios folders). This lets developers, when necessary, patch the generated project files by hand for fine-grained optimization or specific settings, which is especially useful for complex applications requiring an individual approach to performance.

For typical, simple applications it is often enough to use the Expo bundle with its built-in native bindings. However, if the application has complex functionality or requires deep customization, custom React Native builds are recommended.

Convenience of development and updates

One of the key advantages of React Native is hot reload support for TypeScript/JavaScript code during development. This significantly speeds up development: code changes appear in the app instantly, letting the developer see the result in real time.

React Native also supports "silent updates" that bypass the Google Play and Apple App Store review process, but this applies only to TypeScript/JavaScript code. It lets you quickly ship bug fixes or small feature updates without going through the full store publication cycle.

It is important to understand that the TS/JS code is bound to a specific version of the native dependencies via fingerprinting, which ensures consistency between the JavaScript/TypeScript part and the native part of the application.

Use of LLM in development

Although code generation with LLMs (large language models) is possible, its usefulness is not always high because of the potentially outdated datasets the models were trained on. The generated code may not match the latest React Native versions or current best practices.

React Native continues to evolve, offering developers a flexible and effective way to build cross-platform applications. It combines development speed with access to native functionality, making it an attractive choice for many projects.

Pixel Perfect: myth or reality in the era of declarativeness?

In the world of interface development there is a common concept: "pixel-perfect layout." It implies reproducing the design mockup as precisely as possible, down to the last pixel. For a long time this was the gold standard, especially in the era of classic web design. However, with the arrival of declarative layout and the rapid growth in the variety of devices, the "pixel perfect" principle is becoming ever more ephemeral. Let's try to figure out why.

Imperative WYSIWYG vs. declarative code: what's the difference?

Traditionally, many interfaces, especially desktop ones, were created with imperative approaches or WYSIWYG (What You See Is What You Get) editors. In such tools the designer or developer manipulates elements directly, placing them on a canvas with pixel accuracy. It is like working in a graphics editor: you see how your element looks, and you can position it precisely. In that setting, achieving "pixel perfect" was an entirely realistic goal.

However, modern development is increasingly built on declarative layout. This means you do not tell the computer "put this button here"; you describe what you want to get. For example, instead of specifying an element's exact coordinates, you describe its properties: "This button should be red, have 16px padding on all sides and sit in the center of its container." Frameworks like React, Vue, SwiftUI and Jetpack Compose are built on exactly this principle.

Why "pixel perfect" does not work with declarative layout across many devices

Imagine you are building an application that should look equally good on an iPhone 15 Pro Max, a Samsung Galaxy Fold, an iPad Pro and a 4K monitor. Each of these devices has a different screen resolution, pixel density, aspect ratio and physical size.

With the declarative approach, the system itself decides how to render your described interface on a particular device, taking all its parameters into account. You set rules and relationships, not hard-coded coordinates.

* Adaptability and responsiveness: The main goal of declarative layout is to create adaptive, responsive interfaces. Your interface should automatically adapt to screen size and orientation without breaking and while staying readable. If we chased "pixel perfect" on every device, we would have to create countless variants of the same interface, which would completely negate the advantages of the declarative approach.
* Pixel density (DPI/PPI): Devices have different pixel densities. The same element, 100 "virtual" pixels wide, would look much smaller on a high-density device than on a low-density one if scaling were ignored. Declarative frameworks abstract away physical pixels and work in logical units.
* Dynamic content: Content in modern applications is often dynamic; its volume and structure can change. If we were rigidly tied to pixels, any change in text or images would break the layout.
* Platform variety: Beyond the diversity of devices, there are different operating systems (iOS, Android, web, desktop). Each platform has its own design guidelines, standard controls and fonts. Trying to make an absolutely identical, pixel-perfect interface on every platform would produce an unnatural look and a poor user experience.
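The logical-to-physical pixel conversion mentioned in the density bullet can be made concrete. The formula px = dp * (dpi / 160) is the Android convention (160 dpi is the baseline "mdpi" density); other platforms use analogous scale factors. The sketch below is illustrative.

```python
# Hedged sketch of the logical-to-physical pixel conversion that declarative
# frameworks perform internally. px = dp * (dpi / 160) is the Android rule;
# 160 dpi is the baseline density at which 1dp == 1px.
def dp_to_px(dp: float, dpi: float) -> float:
    return dp * (dpi / 160)


# The same 100dp button occupies very different physical pixel counts:
print(dp_to_px(100, 160))  # 100.0 on a baseline mdpi screen
print(dp_to_px(100, 480))  # 300.0 on an xxhdpi phone
```

This is exactly why a layout expressed in logical units survives density changes that would shatter a pixel-perfect one.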

The old approaches did not go away, but evolved

It is important to understand that building interfaces is not a binary choice between "imperative" and "declarative." Historically, each platform had its own tools and approaches to creating interfaces.

* Native interface files: for iOS these were XIB files and Storyboards; for Android, XML layout files. These files are a pixel-perfect WYSIWYG layout that is then rendered at runtime just as in the editor. This approach has not disappeared; it continues to evolve, integrating with modern declarative frameworks. For example, SwiftUI at Apple and Jetpack Compose at Android took the purely declarative path while retaining the option of classic layouts.
* Hybrid solutions: real projects often combine approaches. For example, the basic structure of the application can be implemented declaratively, while specific screens that require precise positioning use lower-level imperative methods or native components built with the platform's specifics in mind.

From monolith to adaptability: how device evolution shaped declarative layout

The world of digital interfaces has undergone tremendous change over the past decades. From stationary computers with fixed resolutions, we have arrived at an era of explosive growth in the variety of user devices. Today our applications must work equally well on:

* Smartphones of all form factors and screen sizes.
* Tablets with their unique orientation modes and split-screen.
* Laptops and desktops with monitors of various resolutions.
* TVs and media centers, controlled remotely. Notably, even for TVs, whose remotes may be as simple as the Apple TV Remote with its minimum of buttons or, conversely, overloaded with functions, modern interface requirements demand that the code not need specific adaptation for these input peculiarities. The interface should work "as if by itself," without an extra description of "how" to interact with a particular remote.
* Smart watches and wearables with minimalist screens.
* Virtual reality (VR) headsets, requiring an entirely new approach to spatial interfaces.
* Augmented reality (AR) devices, overlaying information onto the real world.
* Automotive infotainment systems.
* And even household appliances: from refrigerators with touch screens and washing machines with interactive displays to smart ovens and smart-home systems.

Each of these devices has its own unique characteristics: physical dimensions, aspect ratio, pixel density, input methods (touch screen, mouse, controllers, gestures, voice commands) and, importantly, the subtleties of the usage context. A VR headset demands deep immersion, a smartphone demands fast, intuitive use on the go, while a refrigerator's interface should be as simple and large as possible for quick navigation.

The classic approach: the burden of maintaining separate interfaces

In the era when desktops and the first mobile devices dominated, it was business as usual to create and maintain separate interface files, or even entirely separate interface code, for each platform.

* iOS development often required Storyboards or XIB files in Xcode and code in Objective-C or Swift.
* For Android, XML layout files and code in Java or Kotlin were created.
* Web interfaces were built with HTML/CSS/JavaScript.
* C++ applications on the various desktop platforms used their own specific frameworks and tools:
* On Windows these were MFC (Microsoft Foundation Classes) and the Win32 API, with manually drawn elements or resource files for dialog windows and controls.
* On macOS, Cocoa (Objective-C/Swift) or the older Carbon API was used for direct control of the graphical interface.
* On Linux/Unix-like systems, libraries such as GTK+ or Qt were common, providing their own sets of widgets and mechanisms for building interfaces, often via XML-like markup files (for example, .ui files in Qt Designer) or direct programmatic creation of elements.

This approach gave maximum control over each platform, allowing all of its specific features and native elements to be accounted for. But it had a huge drawback: duplicated effort and enormous maintenance costs. The smallest change in design or functionality required edits across several essentially independent codebases. This turned into a real nightmare for development teams, slowing the release of new features and increasing the likelihood of errors.

Declarative layout: a single language for diversity

It was in response to this rapidly mounting complexity that declarative layout emerged as the dominant paradigm. Frameworks like React, Vue, SwiftUI, Jetpack Compose and others are not just a new way of writing code but a fundamental shift in thinking.

The main idea of the declarative approach: instead of telling the system "how" to draw every element (imperative), we describe "what" we want to see (declarative). We specify the interface's properties and state, and the framework decides how best to render it on a particular device.

This became possible thanks to the following key advantages:

1. Abstraction from platform details: declarative frameworks are designed specifically to let you forget the low-level details of each platform. The developer describes components and their relationships at a higher level of abstraction, using a single, portable codebase.
2. Automatic adaptation and responsiveness: frameworks take responsibility for automatically scaling, reflowing and adapting elements to different screen sizes, pixel densities and input methods. This is achieved through flexible layout systems such as Flexbox or Grid and concepts like "logical pixels" or "dp."
3. Consistency of user experience: despite external differences, the declarative approach preserves a single logic of behavior and interaction across the whole family of devices. This simplifies testing and delivers a more predictable user experience.
4. Faster development and lower cost: with the same code able to run on many platforms, development and support time and cost drop significantly. Teams can focus on functionality and design rather than rewriting the same interface over and over.
5. Future-proofing: abstracting away from the specifics of current devices makes declarative code more resilient to the appearance of new device types and form factors. Frameworks can be updated to support new technologies, and your already-written code receives that support relatively seamlessly.

Conclusion

Declarative layout is not just a fashionable trend but a necessary evolutionary step, driven by the rapid development of user devices, including the Internet of Things (IoT) and smart household appliances. It lets developers and designers create complex, adaptive, consistent interfaces without drowning in endless platform-specific implementations. The shift from imperative control over every pixel to a declarative description of the desired state is a recognition that the interfaces of the future must be flexible, portable and intuitive regardless of which screen displays them.

Programmers, designers and users alike need to learn to live in this new world. Excessive pixel-perfect detail targeted at a particular device or resolution leads to unnecessary development and maintenance costs. Worse, such rigid layouts may simply not work on devices with non-standard interfaces: TVs with limited input, VR and AR headsets, and other devices of the future we do not even know about yet. Flexibility and adaptability are the keys to building successful interfaces in the modern world.

Demens TV Heads Nft

I want to share my new project: the NFT collection "Demens TV Heads."

This is a series of digital artworks depicting people of different characters and professions, in the style of the DemensDeum logo.
The first work, "Fierce" ("Grozny"), is a stylized self-portrait.

I plan to release only 12 NFTs, one each month.

Each work exists not only on the Ethereum blockchain but is also available on the DemensDeum website and in the GitHub repository, along with its metadata.

If you'd like to take a look, or just appraise them visually, I'd be glad:
https://opensea.io/collection/demens-tv-heads
https://github.com/demensdeum/demens-tv-heads-collection
https://demensdeum.com/collections/demens-tv-heads/fierce.png
https://demensdeum.com/collections/demens-tv-heads/fierce-metadata.txt

Super programmer

Who is he, this mysterious, ephemeral, almost mythical super programmer? A person whose code compiles on the first try, launches on the first kick and goes straight to prod. A legend passed down in bytes from senior to junior. The one who writes bugs on purpose so the others don't get bored. Let's figure out, honestly, with warmth and irony, what superpowers one must have to wear this digital cloak.

1. Writes C/C++ without a single vulnerability
Buffer overflow? Never heard of it.
The super programmer's C++ has no uninitialized variables: they initialize themselves out of respect. He writes new char[256], and the compiler silently adds bounds checking. Where others set a breakpoint, he merely glances. And the bug disappears.

2. Ships features without bugs or testing
He doesn't need tests. His code tests itself at night while he sleeps (although... does he sleep?). Every line is a final stable version, shipped with support for 12 languages and NASA-level accessibility. And if a bug does slip through, it's the universe testing him.

3. Works faster than AI
While ChatGPT is still printing "What a great question!", the super programmer has already built a new OS, ported it to a toaster, and documented everything in Markdown with diagrams. He doesn't ask Stack Overflow; he supports it with his questions from the future. GPT trains on his commits.

4. Understands other people's code better than the authors
"Of course I wrote it... but I don't understand how it works." - an ordinary author.
"Oh, that's due to the recursive call on line 894, which is tied to a side effect in the regex filter. Clever." - the super programmer, without blinking.
He reads Perl on the first attempt, decodes the abbreviations in variable names, and catches bugs by the vibration of the cursor.

5. Writes cross-platform code in assembly
Why write in Rust when you can write in pure x86, ARM, and RISC-V all at once, with the "works everywhere" flag? He has his own opcode table. Even the CPU thinks twice before executing his instructions. He doesn't optimize; he transcends.

6. Answers questions about deadlines down to the second
"When will it be ready?"
"In 2 hours, 17 minutes, and 8 seconds. And yes, that accounts for the bugs, a smoke break, and one philosophical question in the chat."
If someone asks him to go faster, he simply rebuilds space-time with `make -j`.

7. Reverse-engineers and repairs proprietary frameworks
A proprietary SDK fell apart, the API has no documentation, everything is encrypted with Base92 and coughing up segfaults? For the super programmer, that's an ordinary Tuesday. He'll open the binary, inhale the hex, and an hour later there will be a patch with a fix, performance improvements, and a freshly added dark mode.

8. His own designer and UX specialist
His UIs make people cry with beauty, and the buttons are found by pure intuition. Even cats manage - verified. He doesn't draw an interface; he reveals its inner essence, like a sculptor freeing a figure from marble. Every tap is a delight.

9. Conducts market research between commits
Between `git push` and a coffee break, he manages to collect market analytics, build a sales funnel, and rethink the monetization strategy. On weekends he tests hypotheses. His A/B tests launch automatically when he opens his laptop.

10. Rebuilds Microsoft single-handedly
What takes corporations 10 years and a thousand engineers takes him a Friday evening and a good pizza. Windows 11? He made Windows 12. Office? Already done. Excel? His runs on voice control and helps plan your vacation. Everything works better and weighs less.

11. Deploys and maintains infrastructure for 1 million users
His home NAS is a Kubernetes cluster. Monitoring? Grafana with memes. He deploys an API faster than some people manage to open Postman. Everything is documented, automated, and as reliable as a Soviet teapot.

12. Needs no technical support
Users don't complain about his software. They just use it with reverence. FAQ? Not needed. Tutorials? Intuition will guide you. He is the only developer whose "Help" button leads to a gratitude page.

13. Doesn't sleep, doesn't eat, doesn't get distracted
He runs on caffeine and a pure desire to write code. Instead of sleep, refactoring. Instead of food, Debian packages. His life cycle is a continuous development cycle. CI/CD is not a pipeline for him, it's a lifestyle.

14. Communicates with customers without pain
"We need to make Uber, but better, in two days." - "Look: here's the roadmap, here are the risks, here's the MVP. And let's first agree on the goals."
He knows how to say "no" so that the customer replies: "Thank you, now I understand what I want."

15. Instantly programs nuclear reactors
How much heat is released when a uranium nucleus splits? The super programmer knows. And he knows how to calculate it in Rust, C, Swift, even in Excel. His reactor is not only safe, it also gets OTA updates.

16. Has knowledge in every possible field
Philosophy, physics, the tax code of Mongolia - it's all in his head. He participates in quizzes, where he is always the leader. If he doesn't know something, he has simply unloaded that memory temporarily to make room for new knowledge. It will be back shortly.

17. Knows all algorithms and design patterns
No need to explain to him how A*, Dijkstra, or Singleton works. He invented them. Around him, patterns behave correctly. Even antipatterns fix themselves - out of shame.

18. Worked at Apple and Google, and left out of boredom
He's been everywhere: Apple, Google, NASA, IKEA (testing the cabinet interface). Then he realized he was already too good, and went off to build free open-source projects for pleasure. He doesn't need money because:

19. He is sitting on Bitcoin, and he is Satoshi Nakamoto
Yes, it's him. He just doesn't say so. All those wallets with millions of BTC are actually on his flash drive, walled up in concrete. Meanwhile, he writes the backend for a farmers' cooperative out in the countryside, because "it was interesting to try Kotlin Multiplatform."

Conclusion: a bit of seriousness
In reality, programmers are ordinary people.
We make mistakes. We get tired. Sometimes we are so confident in ourselves that we don't see the obvious - and that is exactly when the most expensive mistakes in the history of IT are made.

Therefore, it is worth remembering:

* It is impossible to know everything, but it is important to know where to look.
* Working in a team is not a weakness, but a path to a better solution.
* The tools that protect us are not "crutches", but armor.
* Asking is normal. Doubting is right. Making mistakes is inevitable. Learning is necessary.
* Irony is our shield. Code is our weapon. Responsibility is our compass.

And the legends about the super programmer are a reminder that we all sometimes strive for the impossible. And that is exactly where the real magic of programming lies.

Why documentation is your best friend

(and how not to become a guru whose advice stops working after an update)

“Apps may only use public APIs and must run on the currently shipping OS.” (Apple App Review Guidelines)

If you have ever started working with a new framework and caught yourself thinking, "I'll figure it all out as I go, documentation is for bores," you are definitely not alone. Many developers have a natural instinct: try first, read later. This is fine.

But it is exactly at this stage that you can easily turn off the right path and end up in a situation where the code works... but only today, and only "on my machine."

Why isn't "figuring it out" enough?

Frameworks, especially closed and proprietary ones, are complex and multi-layered. They contain a lot of hidden logic, optimizations, and implementation details which:

* are not documented;
* are not guaranteed;
* can change at any time;
* are trade secrets and may be protected by patents;
* contain bugs and flaws known only to the framework's developers.

When you act on a hunch, you can easily build your architecture on random observations instead of relying on clearly described rules. As a result, the code becomes fragile in the face of updates and edge cases.

Documentation is not a restriction, but support

Framework developers write manuals for a reason: the documentation is an agreement between you and them. As long as you stay within it, they promise:

* stability;
* support;
* predictable behavior.

If you step outside this framework, everything that happens next becomes exclusively your responsibility.

Experiments? Absolutely. But within the rules.
Curiosity is a developer's superpower. Exploring, trying non-standard approaches, testing boundaries - all of this is necessary. But there is an important "but":

You need to experiment within the bounds of the documentation and best practices.

Documentation is not a prison, but a map. It shows which capabilities are actually intended and supported. Experiments of this kind are not only useful, but also safe.

Caution: gurus

Sometimes you may encounter real "experts":

* they run courses,
* speak at conferences,
* write books and blogs,
* share "their approach" to the framework.

But even if they sound convincing, it is important to remember:
if their approaches contradict the documentation, they are unstable.

Such "empirical patterns" can:

* work only on a specific version of the framework;
* be vulnerable to updates;
* break in unpredictable situations.

Gurus are great when they respect the manuals. Otherwise, their advice must be filtered through the official documentation.

A little SOLID

Three ideas from the SOLID principles are especially relevant here:

* Open/Closed Principle: extend behavior through the public API, don't dig into the internals.
* Liskov Substitution Principle: rely on the contract, not the implementation. Violate this, and everything breaks as soon as the implementation is replaced.
* Dependency Inversion Principle: high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions.

What does this mean in practice? If you use a framework and tie yourself directly to its internal details, you violate this principle.
Instead, build your dependencies on the public interfaces, protocols, and contracts that the framework officially supports. This gives you:

* better isolation of your code from changes in the framework;
* the ability to easily test and replace dependencies;
* predictable behavior and a stable architecture.

When your code depends on details rather than abstractions, you literally embed yourself into a specific implementation that can disappear or change at any time.
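To make the Dependency Inversion idea concrete, here is a minimal Python sketch (the `Storage` and `ProfileService` names are hypothetical, invented purely for illustration): the high-level service depends only on an abstract contract, so the concrete implementation can be swapped without touching the service.

```python
from abc import ABC, abstractmethod

# The abstraction: a contract that high-level code can rely on.
class Storage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

# A detail: one possible implementation of the contract.
class InMemoryStorage(Storage):
    def __init__(self) -> None:
        self._data = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]

# The high-level module depends only on the Storage abstraction,
# so the implementation can be replaced (file, network, mock) freely.
class ProfileService:
    def __init__(self, storage: Storage) -> None:
        self._storage = storage

    def rename(self, user: str, name: str) -> None:
        self._storage.save(user, name)

    def name_of(self, user: str) -> str:
        return self._storage.load(user)

service = ProfileService(InMemoryStorage())
service.rename("u1", "Alice")
print(service.name_of("u1"))  # Alice
```

Testing becomes trivial too: a test double only has to satisfy the `Storage` contract, not mimic any framework internals.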

What if it's a bug?

Sometimes you do everything right, and it still works incorrectly. That happens; frameworks are not perfect. In this case:

* Build a minimal reproducible example.
* Make sure you are using only the documented API.
* Send a bug report: you will definitely be understood and, most likely, helped.

If the example is built on hacks and workarounds, the developers are not obliged to support it, and your case will most likely simply be skipped.

How to get the most out of a framework

* Read the documentation. Seriously.
* Follow the guides and recommendations from the authors.
* Experiment, but within what is described.
* Check any advice (even from the most famous speakers!) against the manual.
* File bugs with minimal test cases and respect for the contract.

Conclusion

Frameworks are not black boxes, but tools that come with rules of use. Ignoring those rules means writing code at random. And we want our code to live a long time, delight users, and not break at the first minor update.

So: trust, but verify. And yes, read the manuals. They are your superpower.

Sources

https://developer.apple.com/app-store/review/guidelines/
https://en.wikipedia.org/wiki/SOLID
https://en.wikipedia.org/wiki/API
https://en.wikipedia.org/wiki/RTFM

Docker security: why running as root is a bad idea

Docker has become an indispensable tool in modern DevOps and development. It lets you isolate environments, simplify deployment, and quickly scale applications. However, by default Docker requires root privileges, and this creates a potentially dangerous area that is often ignored in the early stages.

Why does Docker run as root?

Docker relies on Linux kernel features: cgroups, namespaces, iptables, mounts, networking, and other system facilities. These operations are available only to the superuser.

That is why:
* the dockerd daemon runs as root,
* docker commands are passed to this daemon.

This simplifies things and gives full control over the system, but it also opens up potential vulnerabilities.

Why it is dangerous: container breakout, CVEs, RCE

Container breakout

With weak isolation, an attacker can use chroot or pivot_root to escape onto the host.

Examples of real attacks:

* CVE-2019-5736: a vulnerability in runc that allowed overwriting the runc binary and executing code on the host.
* CVE-2021-3156: a vulnerability in sudo that allowed escalating to root inside a container and breaking out.

RCE (Remote Code Execution)

If the application in the container is vulnerable and runs as root, RCE = full control over the host.

Rootless Docker: solving the problem

To minimize these risks, Docker introduced rootless mode. In this mode, both the daemon and the containers run as a regular user, without any root privileges. This means that even if an attacker gains control of a container, they cannot harm the host system.
There are limitations: you cannot bind ports below 1024 (for example, 80 and 443), and the --privileged mode, as well as some network modes, is unavailable. However, in most development and CI/CD scenarios rootless Docker does its job and significantly raises the level of security.

Historically, running as root is an antipattern

The principle of least privilege has been applied in the Unix/Linux world from the very beginning: the fewer rights a process has, the less harm it can do. Docker originally required root access, but today that is considered a potential threat.
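The same principle can be enforced inside the application itself. As a defense-in-depth sketch (illustrative Python, not part of Docker or any real tool), an entrypoint can refuse to start when it detects that it is running as root:

```python
import os
import sys

def running_as_root() -> bool:
    """Return True when the effective UID is 0 (root) on POSIX systems."""
    geteuid = getattr(os, "geteuid", None)  # os.geteuid is absent on Windows
    return geteuid is not None and geteuid() == 0

if __name__ == "__main__":
    if running_as_root():
        # Fail fast: least privilege says this process should not be root.
        sys.exit("Refusing to start as root; use rootless Docker or a non-root USER.")
    print("Running as an unprivileged user, as intended.")
```

In a Dockerfile you would typically pair a check like this with a non-root `USER` directive, so the container never starts as root in the first place.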

Sources

https://docs.docker.com/engine/security/rootless/
https://rootlesscontaine.rs/

The non-obvious problem of Docker containers: hidden vulnerabilities

What is "dependency hell" (DH)?

"Dependency hell" (DH) is a term for the problems that arise when managing dependencies in software. Its main causes are version conflicts, the difficulty of integrating different libraries, and the need to maintain compatibility between them. DH includes the following aspects:

- Version conflicts: projects often require specific versions of libraries, and different components can depend on incompatible versions of the same library.
- Difficult updates: updating dependencies can lead to unexpected errors or broken compatibility, even if the new version contains fixes or improvements.
- Environment management: the desire for an isolated, stable environment led to virtual environments, containerization, and other solutions aimed at simplifying dependency management.

It is important to note that although fixing vulnerabilities is one of the reasons new library versions are released, it is not the main driving force of DH. The core problem is that every change, whether a bug fix, a new feature, or a security patch, can trigger a chain of dependency changes that complicate stable development and support of the application.
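To make the version-conflict aspect concrete, here is a toy Python sketch (hypothetical, nothing like a real package resolver): each component declares an inclusive range of acceptable major versions of a shared library, and we check whether the ranges can all be satisfied at once.

```python
# Toy dependency-hell check: each requirement is an inclusive
# (min, max) range of acceptable major versions of a shared library.
def compatible_range(requirements):
    """Intersect version ranges; return the common range, or None on conflict."""
    low = max(lo for lo, hi in requirements)
    high = min(hi for lo, hi in requirements)
    return (low, high) if low <= high else None

# Component A needs libfoo 1.x-2.x, component B needs libfoo 3.x-4.x.
print(compatible_range([(1, 2), (3, 4)]))  # None: classic dependency hell
# Relax A to accept up to 3.x, and the conflict disappears.
print(compatible_range([(1, 3), (3, 4)]))  # (3, 3)
```

Real resolvers deal with far richer constraints, but the core failure mode is the same: the intersection of everyone's requirements is empty.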

How did the fight against DH lead to the creation of Docker?

Trying to solve DH, developers looked for ways to create isolated, stable environments for applications. Docker was an answer to this challenge. Containerization makes it possible to:

- Isolate the environment: all dependencies and libraries are packaged together with the application, which guarantees it runs the same anywhere Docker is installed.
- Simplify deployment: a developer can configure the environment once and use it to deploy to any server without extra setup.
- Minimize conflicts: since each application runs in its own container, the risk of conflicts between the dependencies of different projects is significantly reduced.

Thus, Docker offered an effective way to fight DH, letting developers focus on application logic rather than on the difficulties of environment setup.

The problem of outdated dependencies in Docker

Despite all of Docker's advantages, a new class of problems appeared: dependencies going stale. This happens for several reasons:

1. The container is frozen in time

When a Docker image is created, a specific state of all packages and libraries is fixed. Even if vulnerabilities are later found in the base image (for example, `ubuntu:20.04`, `python:3.9`, `node:18-alpine`) or new versions are released, the container keeps running with the originally installed versions. If the image is never rebuilt, the application can run with outdated and potentially vulnerable components for years.

2. No automatic updates

Unlike traditional servers, where you can configure automatic package updates through system managers (for example, `apt upgrade` or `npm update`), containers are not updated automatically. An update happens only when the image is rebuilt, which requires discipline and regular control.

3. Pinned dependencies

To ensure stability, developers often pin dependency versions in files like `requirements.txt` or `package.json`. This approach prevents unexpected changes, but it also freezes the state of the dependencies, even if bugs or vulnerabilities are later discovered in them.

4. Outdated base images

The base images chosen for containers can also become outdated over time. For example, if the application is built on the `node:16` image while the developers have already moved to `node:18` for its improvements and fixes, your environment stays on the outdated version, even if everything inside the code works correctly.

How to avoid problems with outdated dependencies?

Include regular checks for outdated dependencies and vulnerabilities in your CI/CD process:

– For Python:

pip list --outdated

– For Node.js:

npm outdated

– Use vulnerability analysis tools, for example `trivy`:

trivy image my-app

Monitor base image updates

Subscribe to base image updates on Docker Hub or in the corresponding GitHub repositories so that you learn about critical fixes and updates in time.

Conclusion

Dependency hell arose not only from the need to fix vulnerabilities, but also from the difficulty of managing and updating dependencies. Docker offered an effective way to fight DH by providing isolated and stable environments for applications. However, containerization brought a new task: images must be rebuilt regularly to keep dependencies from going stale and critical vulnerabilities from accumulating.

It is important for modern DevOps specialists not only to resolve version conflicts, but also to adopt regular, automated checks of dependency freshness so that containers remain secure and effective.

Builder Pattern: creating an object in stages over time

Introduction

The previous article examined the general case of using the Builder pattern, but it did not cover the case where an object is created in stages over time.
The Builder pattern is a creational design pattern that lets you construct complex objects step by step. It is especially useful when an object has many parameters or various configurations. One interesting use case is separating the object-creation process over time.
Sometimes an object cannot be created all at once: its parameters may become known at different stages of the program.

An example on Python

In this example, a car object is created in stages: first, part of the data is loaded from a server, then the user enters the missing information. (The `Car` and `CarBuilder` classes are sketched inline below so the example is self-contained.)

from dataclasses import dataclass
import requests

# Minimal Car and CarBuilder sketches (described in detail in the previous article); Car is frozen, i.e. immutable.
@dataclass(frozen=True)
class Car:
    model: str
    year: int
    color: str
    gps: bool

class CarBuilder:
    def __init__(self): self._parts = {"gps": False}
    def set_model(self, model): self._parts["model"] = model
    def set_year(self, year): self._parts["year"] = year
    def set_color(self, color): self._parts["color"] = color
    def set_gps(self, gps): self._parts["gps"] = gps
    def build(self): return Car(**self._parts)

def fetch_car_data():
    response = requests.get("https://api.example.com/car-info")
    return response.json()

builder = CarBuilder()

# Backend API data
car_data = fetch_car_data()
builder.set_model(car_data["model"])
builder.set_year(car_data["year"])

# User input
color = input("Car color: ")
builder.set_color(color)

gps_option = input("GPS feature? (yes/no): ").lower() == "yes"
builder.set_gps(gps_option)

car = builder.build()
print(car)

Now imagine that the API call and the user input happen in different parts of the application, or even in different libraries. Then the value of the Builder pattern becomes far more obvious than in the simple example above.

Advantages

– The output is an immutable structure that does not need to store optional fields just for the sake of a partially assembled state
– The object is assembled gradually
– Complex constructors are avoided
– The object-assembly code is encapsulated in a single Builder entity
– The code is easier to understand

Sources

https://www.amazon.com/Design-Patterns-Object-Oriented-Addison-Wesley-Professional-ebook/dp/B000SEIBB8
https://demensdeum.com/blog/2019/09/23/builder-pattern/

Demensdeum Coding Challenge #1

Demensdeum Coding Challenge #1 begins!
Prize: 100 USDT
1. Write an image renderer for Windows 11 64-bit
2. Render this image:
https://demensdeum.com/logo/demens1.png
3. The image must be embedded into the application
4. Graphics API: Direct3D or DirectDraw
5. The application with the smallest size in bytes wins
6. The image must be rendered pixel-for-pixel (1:1), exactly like the original, preserving colors
7. Only languages/frameworks that require no additional installation, so that the application starts immediately. For example, a solution that is a single Python script does not qualify, because it requires installing Python and Pygame and launching it manually. A good example: a Python script bundled together with Python and Pygame into an EXE that starts without additional installations.
8. Submit a link to a public repository with the source code and instructions for building the application. A good example: a project with build instructions for Visual Studio Community Edition

Deadline: June 1, after which the results will be announced

Reference solution in Zig + SDL3 + SDL3_image:
https://github.com/demensdeum/DemensDeum-Coding-Challenge-1

Why I chose WordPress

When I thought about creating my own blog in 2015, I faced the question: which platform to choose? After much searching and comparison, I settled on WordPress. This was not a random choice, but the result of analyzing the platform’s capabilities, its advantages and disadvantages. Today, I would like to share my thoughts and experience using WordPress.

Advantages of WordPress

  • Ease of use
    One of the main reasons why I chose WordPress is its intuitive interface. Even if you have never worked with a CMS before, you can master WordPress in a matter of days.
  • A huge number of plugins
    WordPress provides access to thousands of free and paid plugins. These extensions allow you to add almost any functionality related to blogging, from SEO optimization to social media integration.
  • Scalability
    WordPress is great for blogs of all sizes. Having started with a simple personal blog, I know I can easily grow it by adding new features and functionality.
  • Wide selection of themes
    There are a huge number of free and paid themes available for WordPress that let you create a good-looking blog in a short time. A custom design, however, will require the sensitive hand of a designer.
  • SEO-friendly
    WordPress is designed to be search engine friendly by default. Plugins like Yoast SEO make it easy to optimize your content to improve its search rankings.
  • Community and Support
    WordPress has one of the largest communities in the world. If you have a problem, you’ll almost certainly find a solution on forums or blogs dedicated to the platform.
  • Multilingual support
    Thanks to plugins like WPGlobus, I can blog in multiple languages, which is especially important when working with an audience from different countries.

Disadvantages of WordPress

  • Vulnerability to attacks
    WordPress’ popularity makes it a target for hackers. Without proper protection, your site can become a victim of attacks. However, regular updates and installing security plugins help minimize the risks.
  • Plugin Dependency
    Sometimes the functionality you want to add requires installing multiple plugins. This can slow down your blog and cause conflicts between extensions.
  • Performance Issues
    On large blogs, WordPress can start to slow down, especially if many plugins are used. To solve this problem, you need to optimize the database, implement caching, and use a more powerful hosting.
  • Cost of some functions
    While the basic version of WordPress is free, many professional themes and plugins cost money. Sometimes you have to invest to get all the features.

Conclusion

WordPress is a tool that provides the perfect balance between simplicity and power. For me, its advantages outweigh the disadvantages, especially considering the large number of solutions to overcome them. Thanks to WordPress, I was able to create a blog that perfectly suits my needs.

Wordex – speed reading program for iOS

I recently found a speed reading app that I would like to recommend to you.

Speed reading is a skill that can greatly increase your productivity, improve your reading comprehension, and save you time. There are many apps on the market that promise to help you master this skill, but Wordex for iOS stands out among them. In this article, we will tell you what Wordex is, what features it has, who it is suitable for, and why it is worth considering.

What is Wordex?

Wordex is an iOS app designed specifically to develop speed reading skills. It helps users read texts faster, focus on key ideas, and avoid distractions. The program is based on scientific approaches and offers convenient tools to improve reading speed.

Main features of Wordex

  • Speed reading mode: text is displayed in an optimized manner for quick comprehension. Users can adjust the speed of text display depending on their needs.
  • Progress Analysis: The program provides detailed statistics, including reading speed and improvement dynamics. This helps you evaluate your progress and adjust your approach to reading.
  • Text import: Wordex allows you to upload your own texts for practice. You can read articles, books or training materials directly in the application.
  • Intuitive interface: the application is designed in a minimalist style, which makes it easy to use. Even beginners will easily understand the functionality.


Wordex Screenshot 1

Who is Wordex suitable for?

Wordex is ideal for:

  • Students: who need to quickly read course materials and prepare for exams.
  • For businessmen and office workers: who want to process a large amount of information in a minimum amount of time.
  • For readers: who want to read more books and enjoy the process.


Wordex Screenshot 2

Advantages of Wordex

  • Mobility: you can exercise anywhere and anytime thanks to the app on your iPhone or iPad.
  • Personalization: the ability to customize the display of text to suit your needs.


Wordex Screenshot 3

Why try Wordex?

Wordex is not just a tool for learning speed reading. It is a program that develops concentration, expands vocabulary and increases productivity. Once you try Wordex, you will notice how reading stops being a routine and turns into an exciting activity.

Conclusion

If you want to learn speed reading or improve your existing skills, Wordex is a great choice. Easy to use and effective, the app will help you achieve your goals and save valuable time. Download Wordex from the App Store and start practicing today!

AppStore:
https://apps.apple.com/us/app/speed-reading-book-reader-app/id1462633104

Why is DRY important?

There are many articles on the topic of DRY, I recommend reading the original “The Pragmatic Programmer” by Andy Hunt and Dave Thomas. However, I still see many developers having questions about this principle in software development.

The DRY principle states that we must not repeat ourselves, this applies to both code and the processes we perform as programmers. An example of code that violates DRY:

class Client {
    public let name: String
    private var messages: [String] = []
    
    init(name: String) {
        self.name = name
    }
    
    func receive(_ message: String) {
        messages.append(message)
    }
}

class ClientController {
    func greet(client: Client?) {
        guard let client else {
            debugPrint("No client!")
            return
        }
        client.receive("Hello \(client.name)!")
    }

    func goodbye(client: Client?) {
        guard let client else {
            debugPrint("No client!!")
            return
        }
        client.receive("Bye \(client.name)!")
    }
}

As you can see, the greet and goodbye methods take an optional instance of the Client class, which must first be checked for nil before any work with it can begin. To comply with DRY, we need to remove the repeated nil check of the class instance. This can be implemented in many ways; one option is to pass the instance to the class constructor, after which the need for the checks disappears.

We maintain DRY by specializing ClientController on a single Client instance:

class Client {
    public let name: String
    private var messages: [String] = []
    
    init(name: String) {
        self.name = name
    }
    
    func receive(_ message: String) {
        messages.append(message)
    }
}

class ClientController {
    private let client: Client

    init(client: Client) {
        self.client = client
    }

    func greet() {
        client.receive("Hello \(client.name)!")
    }

    func goodbye() {
        client.receive("Bye \(client.name)!")
    }
}

DRY also concerns the processes that occur during software development. Imagine a situation where a team of developers has to ship a release to market by hand, pulling them away from development; this too is a violation of DRY. The situation is resolved by setting up a CI/CD pipeline in which the release ships automatically once the developers meet certain conditions.

In general, DRY is about the absence of repetition both in processes and in code. This also matters because of the human factor: code with fewer repetitive, noisy fragments is easier to check for errors, and automated processes do not let people make mistakes while performing them, because no human is involved.

Steve Jobs had a saying, “A line of code you never have to write is a line of code you never have to debug.”

Sources

https://pragprog.com/titles/tpp20/the-pragmatic-programmer-20th-anniversary-edition/
https://youtu.be/-msIEOGvTYM

I will help you with iOS development for Swift or Objective-C

I am happy to announce that I am now offering my services as an iOS developer on Fiverr. If you need help developing quality iOS apps or improving your existing projects, check out my profile:
https://www.fiverr.com/s/Q7x4kb6

I would be glad to have the opportunity to work on your project.
Email: demensdeum@gmail.com
Telegram: https://t.me/demensdeum

Dynamic Linking of Qt Applications on macOS

Today I released a version of RaidenVideoRipper for Apple devices with macOS and M1/M2/M3/M4 (Apple Silicon) processors. RaidenVideoRipper is a quick video editing application that lets you cut a part of a video file into a new file. You can also make a GIF or export the audio track to MP3.

Below I will briefly describe the commands I used to do this. The theory behind what is happening here, and the documentation for the utilities, can be found at the following links:
https://www.unix.com/man-page/osx/1/otool/
https://www.unix.com/man-page/osx/1/install_name_tool/
https://llvm.org/docs/CommandGuide/llvm-nm.html
https://linux.die.net/man/1/file
https://www.unix.com/man-page/osx/8/SPCTL/
https://linux.die.net/man/1/chmod
https://linux.die.net/man/1/ls
https://man7.org/linux/man-pages/man7/xattr.7.html
https://doc.qt.io/qt-6/macos-deployment.html

First, install Qt on your macOS, along with the environment for Qt desktop development. After that, build your project, for example in Qt Creator. Below I describe what is needed so that dependencies on external dynamic libraries work correctly when the application is distributed to end users.

Create a Frameworks directory in the YOUR_APP.app/Contents folder of your application and put the external dependencies in it. For example, this is what Frameworks looks like for the RaidenVideoRipper application:

Frameworks
├── DullahanFFmpeg.framework
│   ├── dullahan_ffmpeg.a
│   ├── libavcodec.60.dylib
│   ├── libavdevice.60.dylib
│   ├── libavfilter.9.dylib
│   ├── libavformat.60.dylib
│   ├── libavutil.58.dylib
│   ├── libpostproc.57.dylib
│   ├── libswresample.4.dylib
│   └── libswscale.7.dylib
├── QtCore.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtCore -> Versions/Current/QtCore
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
├── QtGui.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtGui -> Versions/Current/QtGui
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
├── QtMultimedia.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtMultimedia -> Versions/Current/QtMultimedia
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
├── QtMultimediaWidgets.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtMultimediaWidgets -> Versions/Current/QtMultimediaWidgets
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
├── QtNetwork.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtNetwork -> Versions/Current/QtNetwork
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
└── QtWidgets.framework
    ├── Headers -> Versions/Current/Headers
    ├── QtWidgets -> Versions/Current/QtWidgets
    ├── Resources -> Versions/Current/Resources
    └── Versions

For brevity, only the first two levels of nesting are shown.

Next, we print the current dynamic dependencies of your application:

otool -L RaidenVideoRipper 

Output for the RaidenVideoRipper binary, which is located in RaidenVideoRipper.app/Contents/MacOS:

RaidenVideoRipper:
	@rpath/DullahanFFmpeg.framework/dullahan_ffmpeg.a (compatibility version 0.0.0, current version 0.0.0)
	@rpath/QtMultimediaWidgets.framework/Versions/A/QtMultimediaWidgets (compatibility version 6.0.0, current version 6.8.1)
	@rpath/QtWidgets.framework/Versions/A/QtWidgets (compatibility version 6.0.0, current version 6.8.1)
	@rpath/QtMultimedia.framework/Versions/A/QtMultimedia (compatibility version 6.0.0, current version 6.8.1)
	@rpath/QtGui.framework/Versions/A/QtGui (compatibility version 6.0.0, current version 6.8.1)
	/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit (compatibility version 45.0.0, current version 2575.20.19)
	/System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO (compatibility version 1.0.0, current version 1.0.0)
	/System/Library/Frameworks/Metal.framework/Versions/A/Metal (compatibility version 1.0.0, current version 367.4.0)
	@rpath/QtNetwork.framework/Versions/A/QtNetwork (compatibility version 6.0.0, current version 6.8.1)
	@rpath/QtCore.framework/Versions/A/QtCore (compatibility version 6.0.0, current version 6.8.1)
	/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit (compatibility version 1.0.0, current version 275.0.0)
	/System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration (compatibility version 1.0.0, current version 1.0.0)
	/System/Library/Frameworks/UniformTypeIdentifiers.framework/Versions/A/UniformTypeIdentifiers (compatibility version 1.0.0, current version 709.0.0)
	/System/Library/Frameworks/AGL.framework/Versions/A/AGL (compatibility version 1.0.0, current version 1.0.0)
	/System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 1800.101.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1351.0.0)

As you can see, RaidenVideoRipper depends on Qt and dullahan_ffmpeg. Dullahan FFmpeg is a fork of FFmpeg that encapsulates its functionality in a dynamic library, with the ability to query the current execution progress and cancel it via C procedures.
Next, rewrite the paths of the application and all required libraries using install_name_tool.

The command for this is:

install_name_tool -change old_path new_path target

Example of use:

install_name_tool -change /usr/local/lib/libavfilter.9.dylib @rpath/DullahanFFmpeg.framework/libavfilter.9.dylib dullahan_ffmpeg.a
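
Running install_name_tool by hand for every library gets tedious, so the lookup and rewrite can be scripted. Below is a sketch, not a definitive tool: the /usr/local/lib prefix, the DullahanFFmpeg.framework name, and the binary path mirror the example above and are assumptions you should adjust for your own bundle. The call is guarded so the script does nothing on systems without the macOS toolchain.

```shell
# Sketch: rewrite every /usr/local/lib dependency of a binary to the
# @rpath-based framework path used in the example above.
OLD_PREFIX="/usr/local/lib"
NEW_PREFIX="@rpath/DullahanFFmpeg.framework"

fix_deps() {
  target="$1"
  # otool -L prints one dependency per line; the first field is the path
  otool -L "$target" | awk 'NR>1 {print $1}' | while read -r dep; do
    case "$dep" in
      "$OLD_PREFIX"/*)
        # keep only the file name, e.g. libavfilter.9.dylib
        install_name_tool -change "$dep" "$NEW_PREFIX/${dep##*/}" "$target"
        ;;
    esac
  done
}

# otool/install_name_tool exist only on macOS with the Xcode tools installed
if command -v otool >/dev/null 2>&1; then
  fix_deps RaidenVideoRipper.app/Contents/MacOS/RaidenVideoRipper
fi
```

After running it, re-check the result with otool -L as shown above.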

After you have set all the correct paths, the application should start correctly. Check that all library paths are relative, then move the binary and open it again.
If you see an error, inspect the paths via otool and change them again via install_name_tool.

There is also a dependency-confusion error, when the library you substituted does not contain the required symbol in its table. You can check for the presence or absence of a symbol like this:

nm -gU path

Once executed, you will see the exported (defined external) symbols of the library or application.
It is also possible that you copied dependencies built for the wrong architecture; you can check this using file:

file path

The file utility will show you which architecture a library or application was built for.

Qt also requires a PlugIns folder inside the Contents folder of your YOUR_APP.app bundle; copy the plugins from your Qt installation into Contents. Then verify that the application works, after which you can slim down the PlugIns folder by deleting items from it and re-testing the application.
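
The Qt macOS deployment documentation linked above also ships a tool, macdeployqt, that automates most of this work: it copies the required Qt frameworks and plugins into the bundle and rewrites the install names for you. A minimal sketch, assuming the bundle name from this article (macdeployqt lives in the bin directory of your Qt installation; the call is guarded so it is a no-op when the tool is not on PATH):

```shell
# Sketch: let macdeployqt collect Qt frameworks/plugins into the bundle
# and fix the install names; -dmg additionally produces a disk image.
APP="RaidenVideoRipper.app"
if command -v macdeployqt >/dev/null 2>&1; then
  macdeployqt "$APP" -dmg
fi
```

Note that non-Qt dependencies such as Dullahan FFmpeg may still need the manual install_name_tool treatment described above.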

macOS Security

Once you have copied all the dependencies and fixed the paths for dynamic linking, you will need to sign the application with a developer signature and also send a version of the application to Apple for notarization.

If you don’t have $100 for a developer license or don’t want to sign anything, then write instructions for your users on how to launch the application.

These instructions also work for RaidenVideoRipper:

  • Disable Gatekeeper: sudo spctl --master-disable
  • Allow launch from any source in Privacy & Security: switch the applications setting to Anywhere
  • Remove the quarantine flag from the downloaded zip or dmg: xattr -d com.apple.quarantine app.dmg
  • Check that the quarantine flag (com.apple.quarantine) is gone: ls -l@ app.dmg
  • Confirm the launch of the application in Privacy & Security if necessary

A quarantine-flag problem usually shows up as the message “The application is damaged” on the user’s screen. In this case, remove the quarantine flag from the file’s metadata.

Link to RaidenVideoRipper build for Apple Silicon:
https://github.com/demensdeum/RaidenVideoRipper/releases/download/1.0.1.0/RaidenVideoRipper-1.0.1.0.dmg

Video stabilization with ffmpeg

If you want to stabilize your video and remove camera shake, the `ffmpeg` tool offers a powerful solution. Thanks to the built-in `vidstabdetect` and `vidstabtransform` filters, you can achieve professional results without using complex video editors.

Preparing for work

Before you begin, make sure your `ffmpeg` supports the `vidstab` library. On Linux, you can check this with the command:

ffmpeg -filters | grep vidstab

If the library is not installed, you can add it (on Debian/Ubuntu):

sudo apt install ffmpeg libvidstab-dev  

Installation for macOS via brew:

brew install libvidstab
brew install ffmpeg

Now let’s move on to the process.

Step 1: Movement Analysis

First, you need to analyze the video motion and create a file with stabilization parameters.

ffmpeg -i input.mp4 -vf vidstabdetect=shakiness=10:accuracy=15:result=transforms.trf -f null -

Parameters:

shakiness: The level of video shaking (default 5; can be raised to 10 for more severe cases).
accuracy: Analysis accuracy (default 15, the maximum).
result: The name of the file to save the motion data to.

Step 2: Applying Stabilization

Now you can apply stabilization using the transformation file:

ffmpeg -i input.mp4 -vf vidstabtransform=input=transforms.trf:zoom=5 output.mp4

Parameters:

input: Points to the file with transformation data (created in the first step).
zoom: Zoom percentage to hide black borders (e.g. 5 zooms in by 5%; automatic zooming is controlled by the optzoom option).
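
The two passes can be combined into a small wrapper script. This is a sketch under the assumptions above: input.mp4, the output name, and the parameter values are placeholders, and the ffmpeg calls are guarded so the script does nothing when ffmpeg or the input file is missing.

```shell
# Sketch: two-pass vidstab stabilization, mirroring the commands above.
IN="input.mp4"            # placeholder input file
OUT="output_stabilized.mp4"
TRF="transforms.trf"      # motion data written by pass 1, read by pass 2
DETECT="vidstabdetect=shakiness=10:accuracy=15:result=$TRF"
TRANSFORM="vidstabtransform=input=$TRF:zoom=5"

if command -v ffmpeg >/dev/null 2>&1 && [ -f "$IN" ]; then
  ffmpeg -i "$IN" -vf "$DETECT" -f null -      # pass 1: analyze motion
  ffmpeg -i "$IN" -vf "$TRANSFORM" "$OUT"      # pass 2: apply stabilization
fi
```

Keeping the transforms.trf name in one variable ensures both passes always agree on the motion-data file.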

Local neural networks using ollama

If you want to run something like ChatGPT and you have a powerful enough computer, for example with an Nvidia RTX graphics card, you can use the ollama project, which lets you run one of the ready-made LLM models on a local machine, completely free. ollama lets you chat with LLM models in the ChatGPT style, and the latest version also announced support for reading images and formatting output as JSON.

I have also run the project on a MacBook with an Apple M2 processor, and I know that recent AMD video cards are supported.

To install on macOS, visit the ollama website:
https://ollama.com/download/mac

Click “Download for macOS” and you will get an archive named ollama-darwin.zip; inside it is Ollama.app, which you need to copy to Applications. Then launch Ollama.app; the installation will most likely run on the first launch. After that, you will see the ollama icon in the menu bar, at the top right next to the clock.

After that, launch a regular macOS terminal and type the command to download, install and launch any ollama model. The list of available models, descriptions, and their characteristics can be found on the ollama website:
https://ollama.com/search

Choose the model with the fewest parameters if the model does not fit into your video card’s memory at startup.

For example, the command to launch the llama3.1:latest model:

ollama run llama3.1:latest

Installation for Windows and Linux is broadly similar: on Windows there is an ollama installer, and you then work with it through PowerShell.
On Linux, installation is done by a script, but I recommend using your distribution’s package manager instead; ollama can then be launched from a regular bash terminal.
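
Besides the interactive ollama run session, the local ollama server also exposes a REST API (by default on port 11434), which is convenient for scripting. A minimal sketch, assuming the llama3.1:latest model from the example above is already pulled; the request is skipped when no server is reachable, and the prompt text is just an illustration.

```shell
# Sketch: ask a locally running ollama server for a one-shot completion
# via its REST API (default address http://localhost:11434).
MODEL="llama3.1:latest"
URL="http://localhost:11434/api/generate"
BODY="{\"model\": \"$MODEL\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}"

# only call the API if an ollama server is actually listening
if curl -s -o /dev/null --max-time 2 "http://localhost:11434" 2>/dev/null; then
  curl -s "$URL" -d "$BODY"
fi
```

With "stream": false the server returns one JSON object with the whole response, instead of streaming it token by token.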

Sources
https://www.youtube.com/watch?v=Wjrdr0NU4Sk
https://ollama.com

Unreal Engine on Macbook M2

If you have managed to run the Unreal Engine 5 Editor on a MacBook with an Apple processor, you may have noticed that it is rather sluggish.

To improve the performance of the editor and engine, set Engine Scalability Settings -> Medium. The engine will render everything less beautifully, but you will be able to work with it comfortably on your MacBook.