Flowcharts in practice, without the formalism

A flowchart is a visual tool that turns a complex algorithm into an understandable, structured sequence of actions. From programming to business process management, flowcharts serve as a universal language for visualizing, analyzing, and optimizing even the most complex systems.

Imagine a map where logic takes the place of roads and actions take the place of cities. That is a flowchart: an indispensable tool for navigating even the most tangled processes.

Example 1: A simplified game launch scheme
To understand how it works, let's look at a simple game launch scheme.

This scheme shows the ideal scenario, in which everything happens without failures. In real life, things are much more complicated.

Example 2: An expanded game launch scheme with data loading
Modern games often require an Internet connection to download user data, saves, or settings. Let's add these steps to our scheme.

This scheme is more realistic, but what happens if something goes wrong?

How it was: a game that “broke” when the Internet connection was lost

At the start of the project, the developers could not account for every possible scenario. For example, they focused on the game's main logic and did not think about what would happen if the player loses the Internet connection.

In that situation, the flowchart of their code would look like this:

In this case, instead of reporting an error or shutting down gracefully, the game froze while waiting for data it never received because there was no connection. The result was a “black screen” and a frozen application.

How it became: a fix driven by user complaints

After numerous user complaints about freezes, the development team realized the bug had to be fixed. They changed the code, adding an error-handling block that allows the application to respond correctly to a missing connection.

This is what the corrected flowchart looks like, with both scenarios taken into account:

Thanks to this approach, the game now correctly informs the user about the problem, and in some cases can even switch to offline mode, allowing the player to continue. This is a good example of why flowcharts are so important: they force the developer to think not only about the ideal execution path but also about all possible failures, making the final product much more stable and reliable.

Undefined behavior

Freezes and errors are just one example of unpredictable program behavior. In programming there is a concept of undefined behavior: a situation in which the language standard does not describe how the program should behave in a particular case.

This can lead to anything: from random “garbage” in the output to a program crash or even a serious security vulnerability. Undefined behavior often arises when working with memory, for example with strings in C.

An example in C:

Imagine that a developer copied a string into a buffer but forgot to append the null terminator (`\0`) that marks the end of the string.

This is what the code looks like:

#include <stdio.h>
#include <string.h>

int main() {
    char buffer[5];
    char* my_string = "hello";

    /* Copies the 5 characters of "hello" but NOT the null terminator */
    memcpy(buffer, my_string, 5);

    /* Undefined behavior: printf reads past the buffer looking for '\0' */
    printf("%s\n", buffer);
    return 0;
}

Expected result: “hello”
The actual result is unpredictable.

Why does this happen? The `printf` function with the `%s` specifier expects the string to end with a null character. If it is missing, `printf` keeps reading memory beyond the allocated buffer.

Here is the flowchart of this process, with its two possible outcomes:

This is a clear example of why flowcharts are so important: they force the developer to think not only about the ideal execution path but also about all possible failures, including low-level problems like this one, making the final product much more stable and reliable.

LLM fine-tuning

Currently, all popular LLM service providers support fine-tuning via JSONL files that describe the model's inputs and outputs, with small variations: for Gemini and OpenAI, for example, the format differs slightly.

After a properly formed JSONL file is uploaded, the process of specializing the LLM on the given dataset begins; for all current well-known LLM providers this service is paid.
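As an illustration, one training example in the JSONL format used by OpenAI's chat fine-tuning might look like the line below. Each line of the file is one complete JSON object; the field names follow OpenAI's published schema, other providers such as Gemini use similar but not identical formats, and the example strings here are invented:

```jsonl
{"messages": [{"role": "system", "content": "You are a support assistant for Demensdeum products."}, {"role": "user", "content": "How do I report a bug?"}, {"role": "assistant", "content": "Email support@demensdeum.com with the product name, version, and a description of the problem."}]}
```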

For fine-tuning on a local machine with Ollama, I recommend the detailed video from the Tech With Tim YouTube channel, “Easiest way to fine-tune a LLM and use it with Ollama”:
https://www.youtube.com/watch?v=pTaSDVz0gok

An example Jupyter notebook that prepares a JSONL dataset from an export of all Telegram messages and launches a local fine-tuning process is available here:
https://github.com/demensdeum/llm-train-example

React Native: a brief review

React Native has established itself as a powerful tool for cross-platform development of mobile and web applications. It allows you to build native applications for Android and iOS, as well as web applications, from a single JavaScript/TypeScript code base.

Fundamentals of architecture and development

React Native's architecture is based on native bindings from JavaScript/TypeScript. This means that the application's core business logic and UI are written in JavaScript or TypeScript. When access to specific native functionality is required (for example, the device camera or GPS), these native bindings are used to call code written in Swift/Objective-C for iOS or Java/Kotlin for Android.

It is important to note that the resulting platforms may differ in functionality. For example, a given feature may be available only on Android and iOS but not on the Web, or vice versa, depending on the platform's native capabilities.

Configuration and updates
Native bindings are configured through the plugins key. For stable and safe development it is critical to use the latest versions of React Native components and always consult the current documentation. This helps you avoid compatibility problems and take advantage of the latest updates.

Features of development and optimization

React Native can generate the resulting projects for specific platforms (for example, the android and ios folders). This allows developers, if necessary, to patch the generated project files manually for fine-grained optimization or specific settings, which is especially useful for complex applications that require an individual approach to performance.

For typical, simple applications it is often enough to use an Expo bundle with built-in native bindings. However, if the application has complex functionality or requires deep customization, custom React Native builds are recommended.

Ease of development and updates

One of the key advantages of React Native is hot reload support for TypeScript/JavaScript code during development. This significantly speeds up the development process, since code changes appear in the application instantly, letting the developer see the result in real time.

React Native also supports “silent updates” that bypass the Google Play and Apple App Store review process, but this applies only to TypeScript/JavaScript code. It lets you quickly ship bug fixes or small feature updates without going through the full publication cycle in the app stores.

It is important to understand that the TS/JS code is bound to a specific version of the native dependencies via fingerprinting, which ensures consistency between the JavaScript/TypeScript part and the native part of the application.

Using LLMs in development

Although code generation with LLMs (Large Language Models) is possible, its usefulness is not always high because of the potentially outdated datasets the models were trained on. This means the generated code may not match the latest React Native versions or current best practices.

React Native continues to evolve, offering developers a flexible and effective way to build cross-platform applications. It combines development speed with access to native functionality, making it an attractive choice for many projects.

Pixel Perfect: myth or reality in the era of declarative UI?

In interface development there is a common concept: “pixel-perfect layout”. It implies reproducing the design mock-up as precisely as possible, down to the last pixel. For a long time this was the gold standard, especially in the era of classic web design. However, with the arrival of declarative layout and the rapid growth in the variety of devices, the “pixel perfect” principle is becoming ever more elusive. Let's try to figure out why.

Imperative WYSIWYG vs. declarative code: what is the difference?

Traditionally, many interfaces, especially desktop ones, were created using imperative approaches or WYSIWYG (What You See Is What You Get) editors. In such tools the designer or developer manipulates elements directly, placing them on the canvas with pixel precision. It is similar to working in a graphics editor: you see how your element looks, and you can position it exactly. In that world, achieving “pixel perfect” was a perfectly realistic goal.

However, modern development increasingly relies on declarative layout. This means you do not tell the computer to “put this button here”; you describe what you want to get. For example, instead of specifying the element's exact coordinates, you describe its properties: “This button should be red, have 16px padding on all sides, and be centered in its container.” Frameworks like React, Vue, SwiftUI, and Jetpack Compose use exactly this principle.

Why “pixel perfect” does not work with declarative layout across many devices

Imagine you are building an application that should look equally good on an iPhone 15 Pro Max, a Samsung Galaxy Fold, an iPad Pro, and a 4K monitor. Each of these devices has a different screen resolution, pixel density, aspect ratio, and physical size.

With the declarative approach, the system itself decides how to render your described interface on a particular device, taking all of its parameters into account. You define rules and relationships, not hard-coded coordinates.

* Adaptability and responsiveness: The main goal of declarative layout is to create adaptive, responsive interfaces. Your interface should automatically adapt to screen size and orientation without breaking, while staying readable. If we insisted on “pixel perfect” for every device, we would have to create countless variants of the same interface, which would completely negate the advantages of the declarative approach.
* Pixel density (DPI/PPI): Devices have different pixel densities. The same element 100 “virtual” pixels wide will look much smaller on a high-density device than on a low-density one if scaling is not taken into account. Declarative frameworks abstract away physical pixels and work with logical units.
* Dynamic content: Content in modern applications is often dynamic; its volume and structure can change. If we were rigidly tied to pixels, any change in text or images would “collapse” the layout.
* Multiple platforms: Besides the variety of devices, there are different operating systems (iOS, Android, Web, Desktop). Each platform has its own design language, standard controls, and fonts. Trying to make an absolutely identical, pixel-perfect interface on all platforms would produce an unnatural look and a poor user experience.

The old approaches did not go away; they evolved

It is important to understand that the approach to interfaces is not a binary choice between “imperative” and “declarative”. Historically, each platform had its own tools and approaches for building interfaces.

* Native interface files: on iOS these were XIB/Storyboard files, on Android, XML layout files. Such files are a pixel-perfect WYSIWYG layout that is then rendered at runtime just as it appears in the editor. This approach has not disappeared; it continues to evolve, integrating with modern declarative frameworks. For example, SwiftUI at Apple and Jetpack Compose at Google set off down the path of purely declarative code, yet retained the ability to use classic layouts.
* Hybrid solutions: Real projects often combine approaches. For example, the application's basic structure can be implemented declaratively, while specific screens that require precise element positioning can use lower-level imperative methods or native components built with the platform's specifics in mind.

From monolith to adaptability: how the evolution of devices shaped declarative layout

The world of digital interfaces has undergone tremendous changes over the past decades. From stationary computers with fixed resolutions we have arrived at an era of exponential growth in the variety of user devices. Today our applications must work equally well on:

* smartphones of all form factors and screen sizes;
* tablets, with their unique orientation modes and split-screen use;
* laptops and desktops with monitors of various resolutions;
* TVs and media centers, controlled remotely. Notably, even for TVs, whose remotes can be as simple as the Apple TV Remote with a minimum of buttons or, on the contrary, overloaded with functions, modern interface requirements are such that the code should not need specific adaptation for these input quirks. The interface should work “as if by itself”, without an additional description of how to interact with a particular remote;
* smart watches and wearable devices with minimalist screens;
* virtual reality (VR) headsets, which require a completely new approach to spatial interfaces;
* augmented reality (AR) devices, which overlay information onto the real world;
* automotive infotainment systems;
* and even household appliances: from refrigerators with touch screens and washing machines with interactive displays to smart ovens and smart home systems.

Each of these devices has its own unique characteristics: physical dimensions, aspect ratio, pixel density, input methods (touch screen, mouse, controllers, gestures, voice commands) and, importantly, the subtleties of the usage context. For example, a VR headset demands deep immersion, a smartphone demands fast, intuitive use on the go, while a refrigerator interface should be as simple and large as possible for quick navigation.

The classic approach: the burden of maintaining separate interfaces

In the era of desktop dominance and the first mobile devices, it was normal to create and maintain separate interface files, or even an entirely separate interface code base, for each platform.

* iOS development often required Storyboards or XIB files in Xcode, with code written in Objective-C or Swift.
* For Android, XML layout files and code in Java or Kotlin were created.
* Web interfaces were built with HTML/CSS/JavaScript.
* C++ applications on various desktop platforms used their own specific frameworks and tools:
* On Windows these were MFC (Microsoft Foundation Classes) or the Win32 API, with manual drawing of elements or resource files for dialog windows and controls.
* On macOS, Cocoa (Objective-C/Swift) or the older Carbon API were used for direct control of the graphical interface.
* On Linux/Unix-like systems, libraries like GTK+ or Qt were common, providing their own widget sets and mechanisms for building interfaces, often via XML-like markup files (for example, .ui files in Qt Designer) or direct programmatic creation of elements.

This approach ensured maximum control over each platform, allowing all of its specific features and native elements to be taken into account. But it had a huge drawback: duplicated effort and enormous maintenance costs. The slightest change in design or functionality had to be applied to several essentially independent code bases. This turned into a real nightmare for development teams, slowing the delivery of new features and increasing the likelihood of errors.

Declarative layout: a single language for diversity

It was in response to this rapid growth in complexity that declarative layout emerged as the dominant paradigm. Frameworks like React, Vue, SwiftUI, Jetpack Compose, and others are not just a new way of writing code but a fundamental shift in thinking.

The main idea of the declarative approach: instead of telling the system “how” to draw every element (imperative), we describe “what” we want to see (declarative). We define the properties and state of the interface, and the framework decides how best to render it on a particular device.

This became possible thanks to the following key advantages:

1. Abstraction from platform details: declarative frameworks are specifically designed to let you forget about the low-level details of each platform. The developer describes components and their relationships at a higher level of abstraction, using a single portable code base.
2. Automatic adaptation and responsiveness: frameworks take responsibility for automatic scaling, re-layout, and adaptation of elements to different screen sizes, pixel densities, and input methods. This is achieved through flexible layout systems such as Flexbox or Grid, and concepts like “logical pixels” or “dp”.
3. Consistency of user experience: despite external differences, the declarative approach preserves a single logic of behavior and interaction across a whole family of devices. This simplifies testing and provides a more predictable user experience.
4. Faster development and lower costs: with the same code able to run on many platforms, development and maintenance time and cost drop significantly. Teams can focus on functionality and design instead of repeatedly rewriting the same interface.
5. Readiness for the future: abstracting away from the specifics of current devices makes declarative code more resilient to the emergence of new device types and form factors. Frameworks can be updated to support new technologies, and your existing code picks up that support relatively seamlessly.

Conclusion

Declarative layout is not just a fashionable trend but a necessary evolutionary step, driven by the rapid development of user devices, including the sphere of the Internet of Things (IoT) and smart household appliances. It lets developers and designers create complex, adaptive, consistent interfaces without drowning in endless platform-specific implementations. The shift from imperative control over every pixel to a declarative description of the desired state is a recognition that the interfaces of the future must be flexible, portable, and intuitive, regardless of which screen displays them.

Programmers, designers, and users all need to learn to live in this new world. Extra pixel-perfect detail, designed for a particular device or resolution, leads to unnecessary development and maintenance costs. Moreover, such rigid layouts may simply not work on devices with non-standard interfaces: TVs with limited input, VR and AR headsets, and other devices of the future that we do not even know about today. Flexibility and adaptability are the keys to building successful interfaces in the modern world.

Why programmers still get nothing done, even with neural networks

Today neural networks are used everywhere. Programmers use them to generate code, explain other people's solutions, automate routine tasks, and even create entire applications from scratch. It would seem this should increase efficiency, reduce errors, and speed up development. But reality is far more prosaic: many still do not succeed. Neural networks do not solve the key problems; they only illuminate the depth of ignorance.

Full dependence on LLMs instead of understanding

The main reason is that many developers rely completely on LLMs, ignoring the need for a deep understanding of the tools they work with. Instead of studying the documentation, a chat request. Instead of analyzing the cause of an error, copying a solution. Instead of architectural decisions, generating components from a description. All of this can work at a superficial level, but as soon as a non-standard task arises, integration with a real project is needed, or fine-grained tuning is required, everything falls apart.

Lack of context and outdated practices

Neural networks generate generalized code. They do not account for the specifics of your platform, library versions, environment constraints, or the project's architectural decisions. What is generated often looks plausible but has nothing to do with real, maintainable code. Even simple recommendations may not work if they refer to an outdated framework version or use approaches long recognized as ineffective or unsafe. Models do not understand context; they rely on statistics. This means that errors and antipatterns popular in open-source code will be reproduced again and again.

Redundancy, inefficiency, and the absence of profiling

AI-generated code is often redundant. It pulls in unnecessary dependencies, duplicates logic, and adds abstractions for no reason. The result is an inefficient, heavyweight structure that is hard to maintain. This is especially acute in mobile development, where bundle size, response time, and energy consumption are critical.

A neural network does not profile, does not account for CPU and GPU constraints, and does not care about memory leaks. It does not analyze how efficient the code is in practice. Optimization remains manual work that requires analysis and expertise. Without it, the application becomes slow, unstable, and resource-hungry, even if the code looks “right” from a structural point of view.

Vulnerabilities and security threats

Do not forget about security. There are already known cases where projects partially or fully created with LLMs were successfully hacked. The reasons are typical: use of unsafe functions, lack of input validation, errors in authorization logic, leaks through external dependencies. A neural network can generate vulnerable code simply because it encountered such code in open repositories. Without security specialists and a full review, such errors easily become entry points for attacks.

The Pareto principle and the essence of the flaws

The Pareto principle clearly applies to neural networks: 80% of the result is achieved with 20% of the effort. A model can generate a large amount of code, create the project skeleton, lay out the structure, declare types, and wire up modules. However, all of this can be outdated, incompatible with current library or framework versions, and in need of significant manual revision. Automation here works more like a draft that must be checked, reworked, and adapted to the specific realities of the project.

Cautious optimism

Nevertheless, the future looks encouraging. Constant updating of training datasets, integration with current documentation, automated architecture checks, compliance with design and security patterns: all of this can radically change the rules of the game. Perhaps in a few years we really will write code faster, more safely, and more efficiently, relying on an LLM as a genuine technical co-author. But for now, alas, much has to be checked, rewritten, and reworked by hand.

Neural networks are a powerful tool. But for them to work for you rather than against you, you need fundamentals, critical thinking, and the readiness to take control at any moment.

Gingerita: a prototype for Windows

I present a fork of the Kate text editor called Gingerita. Why a fork, and what is the goal? I want to add the functionality I need in my work without waiting for fixes or new features from the Kate team, or for my patches to be accepted into the main branch.
At the moment a prototype version for Windows is available: an almost vanilla build of Kate with minimal changes. For Gingerita I have developed two plugins: an image viewer usable directly from the editor, and a built-in browser for debugging my web projects or interacting with AI assistants such as ChatGPT.

The Windows version can be tested via the link below:
https://github.com/demensdeum/Gingerita/releases/tag/prototype

Support for Demensdeum products

Welcome to the support page!

If you have questions or problems with Demensdeum products, or you want to suggest improvements, we are always ready to help.

How to contact us:
support@demensdeum.com

We try to answer inquiries within 3-5 business days.

What to include in your letter:

The name of the product
Version (if known)
A detailed description of the problem
Screenshots or videos (if possible)
Device and operating system on which the problem arose

Thank you for using our products; we strive to make your experience as convenient and pleasant as possible.

Sincerely,
Demensdeum team

Vibe-coding tricks: why LLMs still do not get along with SOLID, DRY, and Clean

With the development of large language models (LLMs) such as ChatGPT, more and more developers use them to generate code, design architecture, and speed up integration. In practical use, however, it becomes noticeable that the classical principles of architecture, SOLID, DRY, and Clean, get along poorly with the peculiarities of LLM code generation.

This does not mean the principles are outdated; on the contrary, they work perfectly with manual development. But with LLMs the approach has to be adapted.

Why LLMs struggle with architectural principles

Encapsulation

Encapsulation requires understanding the boundaries between parts of the system, knowing the developer's intentions, and observing strict access restrictions. LLMs often simplify the structure, make fields public for no reason, or duplicate implementations. This makes the code more error-prone and violates architectural boundaries.

Abstractions and interfaces

Design patterns such as Abstract Factory or Strategy require a holistic view of the system and an understanding of its dynamics. Models can create an interface with no clear purpose, fail to provide its implementation, or break the connections between layers. The result is a redundant or non-functional architecture.

DRY (Don't Repeat Yourself)

LLMs do not try to minimize repeated code; on the contrary, it is easier for them to duplicate blocks than to factor out shared logic. Although they can offer refactoring on request, by default models tend to generate “self-sufficient” fragments, even when this leads to redundancy.

Clean Architecture

Clean Architecture implies a strict hierarchy, independence from frameworks, directed dependencies, and minimal coupling between layers. Generating such a structure requires a global understanding of the system, while LLMs operate at the level of word probabilities, not architectural integrity. As a result, the code comes out mixed, with violated dependency directions and a simplified division into layers.

What works better when working with LLMs

WET instead of DRY
The WET (Write Everything Twice) approach is more practical when working with LLMs. Duplicated code does not require the model to retain context, so the result is more predictable and easier to fix. It also reduces the likelihood of non-obvious couplings and bugs.

In addition, duplication helps compensate for the model's short memory: if a piece of logic appears in several places, the LLM is more likely to take it into account during further generation. This simplifies maintenance and increases resistance to “forgetting”.

Simple structures instead of encapsulation

By avoiding complex encapsulation and relying on the direct passing of data between parts of the code, you can greatly simplify both generation and debugging. This is especially relevant for fast iterative development or building an MVP.

Simplified architecture

A simple, flat project structure with a minimum of dependencies and abstractions gives a more stable result during generation. The model adapts such code more easily and breaks the expected connections between components less often.

SDK integration: manual is more reliable

Most language models are trained on outdated versions of documentation. So when they generate SDK installation instructions, errors often appear: outdated commands, irrelevant parameters, or links to unavailable resources. Practice shows it is best to rely on official documentation and manual setup, leaving the LLM an auxiliary role, for example generating boilerplate code or adapting configurations.

Why the principles still work, but with manual development

It is important to understand that the difficulties with SOLID, DRY, and Clean concern code generation through LLMs. When a developer writes code by hand, these principles continue to prove their value: they reduce coupling, simplify maintenance, and increase the readability and flexibility of the project.

This is because human thinking is prone to generalization. We look for patterns, factor repeated logic into separate entities, and create templates. This behavior probably has evolutionary roots: reducing the amount of information saves cognitive resources.

LLMs act differently: they feel no strain from the volume of data and do not strive for economy. On the contrary, it is easier for them to work with duplicated, fragmented information than to build and maintain complex abstractions. That is why they cope more easily with code without encapsulation, with repeated structures and minimal architectural rigor.

Conclusion

Large language models are a useful tool in development, especially in the early stages or when writing auxiliary code. But it is important to adapt your approach to them: simplify the architecture, limit abstraction, avoid complex dependencies, and do not rely on them when configuring an SDK.

The principles of SOLID, DRY, and Clean are still relevant, but they give the best results in human hands. When working with LLMs, it is reasonable to use a simplified, pragmatic style that yields reliable, understandable code that is easy to finish by hand. And where the LLM forgets, code duplication helps it remember.

Demens TV Heads NFT

I want to share my new project: the NFT collection “Demens TV Heads”.

This is a series of digital artworks depicting people of different characters and professions, in the style of the Demensdeum logo.
The first work, “Fierce” (“Grozny”), is a stylized self-portrait.

I plan to release only 12 NFTs, one each month.

Each work exists not only on the Ethereum blockchain but is also available on the Demensdeum website and in the GitHub repository, along with its metadata.

If you are interested in taking a look, or just want to browse, I will be glad:
https://opensea.io/collection/demens-tv-heads
https://github.com/demensdeum/demens-tv-heads-collection
https://demensdeum.com/collections/demens-tv-heads/fierce.png
https://demensdeum.com/collections/demens-tv-heads/fierce-metadata.txt

Super programmer

Who is he – this mysterious, ephemeral, almost mythical super programmer? A person whose code is compiled the first time is launched from half -pike and immediately goes into the Prod. The legend transmitted in bytes from senor to jun. The one who writes bugs specifically so that others are not bored. Let’s honestly, with warmth and irony, we will figure out what superpowers he must have to wear this digital cloak.

1. Writes C/C++ without a single vulnerability
Buffer overflow? Never heard of it.
The super programmer’s C++ has no uninitialized variables – they initialize themselves out of respect. He writes new char[256], and the compiler silently adds bounds checking. Where others set a breakpoint, he just glances. And the bug disappears.

2. Ships features without bugs or testing
He does not need tests. His code tests itself at night while he sleeps (although… does he sleep?). Every line is a final stable version, shipping with support for 12 languages and NASA-grade accessibility. And if a bug does slip through, it is the Universe testing him.

3. Works faster than AI
While ChatGPT is typing “What a good question!”, the super programmer has already built a new OS, ported it to a toaster, and documented everything in Markdown with diagrams. He does not ask Stack Overflow – he feeds it questions from the future. GPT trains on his commits.

4. Understands other people’s code better than its author
“Of course I wrote it… but I don’t understand how it works.” – an ordinary author.
“Ah, that’s because of the recursive call on line 894, tied to a side effect in the regex filter. Clever.” – the super programmer, without blinking.
He reads Perl on the first attempt, decodes the abbreviations in variable names, and catches bugs by the vibration of the cursor.

5. Writes cross-platform code in assembler
Why write in Rust when you can write in pure x86, ARM and RISC-V at once, with a “works everywhere” flag? He keeps his own opcode table. Even the CPU thinks twice before touching his instructions. He does not optimize – he transcends.

6. Answers questions about deadlines to the second
“When will it be ready?”
“In 2 hours, 17 minutes and 8 seconds. And yes, that accounts for the bugs, a smoke break and one philosophical question in the chat.”
If someone asks him to go faster, he simply rebuilds space-time with make -j.

7. Reverse-engineers and repairs proprietary frameworks
A proprietary SDK broke, the API has no documentation, everything is encrypted in Base92 and coughing up segfaults? For the super programmer, this is an ordinary Tuesday. He opens the binary, inhales the hex, and an hour later there is a patch with a fix, performance improvements, and a newly added dark mode.

8. His own designer and UX specialist
His UI comes out so beautiful that people cry, and the buttons are found by pure intuition. Even cats manage – verified. He does not draw an interface – he reveals its inner essence, like a sculptor in marble. Every tap is a delight.

9. Conducts market research between commits
Between git push and the coffee break, he manages to gather market analytics, build a sales funnel and rethink the monetization strategy. On weekends he tests hypotheses. His A/B tests launch automatically when he opens his laptop.

10. Rebuilds Microsoft single-handedly
What takes corporations 10 years and a thousand engineers takes him a Friday evening and a good pizza. Windows 11? He made Windows 12. Office? Already done. Excel? His runs on voice control and helps plan your vacation. Everything works better and weighs less.

11. Deploys and maintains infrastructure for a million users
His homemade NAS is a Kubernetes cluster. Monitoring? Grafana with memes. He deploys an API faster than most people open Postman. Everything is documented, automated, and as reliable as a Soviet kettle.

12. Needs no technical support
Users never complain. They just use his software with reverence. FAQ? Not needed. Tutorials? Intuition will guide you. He is the only developer whose “Help” button leads to a gratitude page.

13. Does not sleep, does not eat, is never distracted
He runs on caffeine and the pure desire to write code. Instead of sleep – refactoring. Instead of food – Debian packages. His life cycle is a continuous development cycle. CI/CD is not a pipeline, it is a lifestyle.

14. Communicates with clients painlessly
“We need to make Uber, but better, in two days.” – “Look: here is the roadmap, here are the risks, here is the MVP. And first, let’s agree on the goals.”
He knows how to say “no” in a way that makes the client reply: “Thank you, now I understand what I want.”

15. Programs nuclear reactors on the fly
How much heat is released when a uranium nucleus splits? The super programmer knows. And he knows how to simulate it in Rust, C, Swift, even in Excel. His reactor is not only safe – it also gets OTA updates.

16. Knows every possible field
Philosophy, physics, Mongolian tax reporting – it’s all in his head. He joins quiz nights, where he takes the lead. If he doesn’t know something, he has simply switched that memory off temporarily to make room for new knowledge. It will come back any moment now.

17. Knows every algorithm and design pattern
No need to explain how A*, Dijkstra or Singleton works – he invented them. Around him, patterns behave correctly. Even antipatterns fix themselves – out of shame.

18. Worked at Apple and Google, and left out of boredom
He has been everywhere: Apple, Google, NASA, IKEA (tested the cabinet interface). Then he realized he was already too good for it, and left to build free open-source projects for pleasure. He does not need money, because:

19. He is sitting on a fortune in Bitcoin, and he is Satoshi Nakamoto
Yes, it’s him. He just doesn’t say so. All those wallets with millions of BTC are actually on his flash drive, walled up in concrete. Meanwhile, he writes the backend for a farming cooperative in the countryside, because “it was interesting to try Kotlin Multiplatform.”

Conclusion: a bit of seriousness
In reality, programmers are ordinary people.
We make mistakes. We get tired. Sometimes we are so sure of ourselves that we miss the obvious – and that is exactly when the most expensive mistakes in the history of IT are made.

Therefore, it is worth remembering:

* It is impossible to know everything – but it is important to know where to look.
* Working in a team is not a weakness, but a path to a better solution.
* The tools that protect us are not “crutches”, but armor.
* To ask is normal. To doubt is healthy. To err is inevitable. To learn is essential.
* Irony is our shield. Code is our weapon. Responsibility is our compass.

And legends about the super programmer are a reminder that we all sometimes strive for the impossible. And that is exactly where the real magic of programming lies.

Why documentation is your best friend

(and how not to become a guru whose advice stops working after the next update)

“Apps may only use public APIs and must run on the currently shipping OS.” – Apple App Review Guidelines

If you have ever started working with a new framework and caught yourself thinking, “I’ll figure it all out myself, documentation is for bores,” you are definitely not alone. Many developers have a natural instinct: try first, read later. That is fine.

But it is exactly at this stage that you can easily stray from the right path and end up in a situation where the code works… but only today, and only “on my machine.”

Why is “figuring it out” not enough?

Frameworks, especially closed and proprietary ones, are complex and multi-layered. They contain a lot of hidden logic, optimizations and implementation details, which:

* are not documented;
* are not guaranteed;
* can change at any time;
* are trade secrets and may be protected by patents;
* contain bugs and flaws known only to the framework’s developers.

When you act “on a hunch”, you can easily build your architecture on random observations instead of on clearly described rules. As a result, the code becomes vulnerable to updates and edge cases.

Documentation is not a restriction, but support

Framework developers write manuals for a reason – the documentation is an agreement between you and them. As long as you act within it, they promise:

* stability;
* support;
* predictable behavior.

If you step outside that framework, everything that happens next is exclusively your responsibility.

Experiments? Absolutely. But within the rules.
Curiosity is the developer’s superpower. Explore, try non-standard approaches, test the boundaries – all of this is necessary. But there is an important “but”:

You need to experiment within the bounds of the documentation and best practices.

Documentation is not a prison, but a map. It shows which capabilities are actually intended and supported. Experiments of that kind are not only useful, but also safe.

Caution: Guru

Sometimes you will run into real “experts” who:

* run courses,
* speak at conferences,
* write books and blogs,
* share “their own approach” to the framework.

But however convincing they sound, it is important to remember:
If their approaches contradict the documentation, they are unstable.

Such “empirical patterns” can:

* work only on a specific version of the framework;
* be vulnerable to updates;
* break in unpredictable situations.

Gurus are great when they respect the manuals. Otherwise, their advice must be filtered through the official documentation.

A little SOLID

Three of the SOLID principles are especially relevant here:

* Open/Closed Principle: extend behavior through the public API, do not dig into the internals.
* Liskov Substitution Principle: do not rely on the implementation, rely on the contract. Violate it, and everything breaks when the implementation is replaced.
* Dependency Inversion Principle: high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions.

What does this mean in practice? If you use a framework and tie yourself directly to its internal details, you violate this principle.
Instead, you should depend on the public interfaces, protocols and contracts that the framework officially supports. This gives you:

* better isolation of your code from changes in the framework;
* the ability to easily test and replace dependencies;
* predictable behavior and architectural stability.

When your code depends on details rather than abstractions, you literally wire yourself into a specific implementation that can disappear or change at any time.
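A minimal Python sketch of that idea (the names Storage, MemoryStorage and remember_user are illustrative, not from any real framework): the high-level code depends only on an abstract contract, so the concrete implementation can be swapped without touching it.

```python
from abc import ABC, abstractmethod

# The contract the "framework" officially supports (the abstraction)
class Storage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

# One concrete implementation; a file- or network-backed one could replace it
class MemoryStorage(Storage):
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

# High-level code depends only on the Storage contract, never on _data
def remember_user(storage: Storage, name: str) -> str:
    storage.save("user", name)
    return storage.load("user")

print(remember_user(MemoryStorage(), "alice"))  # prints: alice
```

If the internals of MemoryStorage change tomorrow, remember_user keeps working – it never reached past the contract.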

And what if it’s a bug?

Sometimes you do everything right, yet it still misbehaves. That happens – frameworks are not perfect. In that case:

* Build a minimal reproducible example.
* Make sure you use only the documented API.
* File a bug report – you will certainly be understood and, most likely, helped.

If your example is built on hacks or workarounds, the developers are under no obligation to support it, and your case will most likely simply be skipped.

How to get the most out of a framework

* Read the documentation. Seriously.
* Follow the guides and recommendations from the authors.
* Experiment – but within what is described.
* Check all advice (even from the most famous speakers!) against the manual.
* File bugs with minimal test cases and respect for the contract.

Conclusion

Frameworks are not black boxes, but tools with rules of use. Ignoring those rules means writing code “at random”. And we want our code to live long, delight users, and not break with every minor update.

So: trust, but verify. And yes, read the manuals. They are your superpower.

Sources

https://developer.apple.com/app-store/review/guidelines/
https://en.wikipedia.org/wiki/SOLID
https://en.wikipedia.org/wiki/API
https://en.wikipedia.org/wiki/RTFM

Cube Art Project 2

Meet Cube Art Project 2

The second version of the voxel art editor, fully rewritten in pure JavaScript without WebAssembly.
Light, fast, and it starts right in the browser – nothing extra.

This is an experiment: cubes, color, freedom and a little meditative 3D geometry.
You can change colors with RGB sliders, save and load scenes, move through space, and just play.

Controls:
– WASD – camera movement
– Mouse – rotation
– GUI – color settings

Online:
https://demensdeum.com/software/cube-art-project-2/

Sources on Github:
https://github.com/demensdeum/cube-art-project-2

The project is written in pure JavaScript using Three.js.
No frameworks, no bundlers, no WebAssembly – only WebGL, shaders, and a little love for pixel geometry.

Scenes can be saved and loaded – create your worlds, save them as JSON, share them, or come back later to refine them.

Docker security: why running as root is a bad idea

Docker has become an indispensable tool in modern DevOps and development. It isolates environments, simplifies deployment, and makes applications easy to scale. However, by default Docker requires root privileges, and this creates a danger zone that is often ignored in the early stages.

Why does Docker run as root?

Docker relies on Linux kernel capabilities: cgroups, namespaces, iptables, mounts, networking, and other system facilities. These operations are available only to the superuser.

That is why:
* the dockerd daemon runs as root,
* docker commands are forwarded to this daemon.

This simplifies things and gives full control over the system, but it also opens up potential vulnerabilities.

Why this is dangerous: container breakout, CVEs, RCE

Container breakout

With weak isolation, an attacker can use chroot or pivot_root to escape to the host.

Examples of real attacks:

* CVE-2019-5736 – a vulnerability in runc that allowed overwriting the runtime binary and executing code on the host.
* CVE-2021-3156 – a vulnerability in sudo that allowed gaining root inside a container and breaking out.

RCE (Remote Code Execution)

If the application in the container is vulnerable and runs as root, RCE means full control over the host.

Rootless Docker: a solution

To minimize these risks, Docker gained a rootless mode. In this mode, both the daemon and the containers run as an ordinary user, with no root privileges at all. Even if an attacker takes control of a container, they cannot harm the host system.
There are limitations: ports below 1024 (for example, 80 and 443) cannot be used, and --privileged mode and some network modes are unavailable. Still, in most development and CI/CD scenarios rootless Docker does its job and significantly raises the level of security.

Historically, running as root is an antipattern

The principle of least privilege has been applied in the Unix/Linux world from the very beginning: the fewer rights a process has, the less harm it can do. Docker originally required root access, but today that is considered a potential threat.

Sources

https://docs.docker.com/engine/security/rootless/
https://rootlesscontaine.rs/

The non-obvious problem of Docker containers: hidden vulnerabilities

What is “dependency hell” (DH)?

“Dependency hell” (DH) is a term for the problems that arise when managing dependencies in software. Its main causes are version conflicts, difficulties integrating different libraries, and the need to maintain compatibility between them. DH includes the following aspects:

– Version conflicts: projects often require specific versions of libraries, and different components can depend on incompatible versions of the same library.
– Difficult updates: updating dependencies can lead to unexpected errors or broken compatibility, even when the new version contains fixes or improvements.
– Environment sprawl: the desire to isolate and stabilize environments led to virtual environments, containerization, and other solutions aimed at simplifying dependency management.

It is important to note that although fixing vulnerabilities is one reason libraries release updated versions, it is not the main driving force of DH. The core problem is that every change – whether a bug fix, a new feature, or a security patch – can trigger a chain of dependency changes that complicate stable development and maintenance of the application.
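A toy sketch of the version-conflict aspect (simplified integer versions, hypothetical package constraints): two components pin incompatible ranges of the same library, and no single version satisfies both.

```python
# Toy model of dependency resolution: each component constrains the
# version of the same library; a resolver must find a version that
# lies inside every range at once.

def find_compatible(constraints):
    """Return all versions (0..99) satisfying every (min, max) inclusive range."""
    candidates = set(range(100))
    for lo, hi in constraints:
        candidates &= set(range(lo, hi + 1))
    return sorted(candidates)

# Component A needs versions 2..3, component B needs 4..5 – no overlap,
# which is exactly the "dependency hell" situation.
print(find_compatible([(2, 3), (4, 5)]))  # []
# Component A needs 2..4, component C needs 3..5 – a resolution exists.
print(find_compatible([(2, 4), (3, 5)]))  # [3, 4]
```

Real resolvers work over semantic-version ranges rather than integers, but the failure mode is the same: an empty intersection.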

How did the fight against DH lead to Docker?

Trying to solve DH, developers looked for ways to create isolated, stable environments for applications. Docker was the answer to this challenge. Containerization lets you:

– isolate the environment: all dependencies and libraries are packaged together with the application, guaranteeing stable operation anywhere Docker is installed.
– simplify deployment: a developer can configure the environment once and use it to deploy to any server without extra setup.
– minimize conflicts: since each application runs in its own container, the risk of conflicts between the dependencies of different projects drops significantly.

Docker thus offered an effective remedy for DH, letting developers focus on application logic instead of on the difficulties of environment setup.

The problem of outdated dependencies in Docker

Despite all of Docker’s advantages, a new class of problems appeared – dependencies going stale. This happens for several reasons:

1. The container is frozen in time

When a Docker image is built, it captures a specific state of every package and library. Even if vulnerabilities are later found in the base image (for example, `ubuntu:20.04`, `python:3.9`, `node:18-alpine`) or new versions are released, the container keeps running with the originally installed versions. If the image is never rebuilt, the application can run on outdated and potentially vulnerable components for years.

2. No automatic updates

Unlike traditional servers, where automatic package updates can be configured through system managers (for example, `apt upgrade` or `npm update`), containers are not updated automatically. An update happens only when the image is rebuilt, which takes discipline and regular oversight.

3. Pinned dependencies

To ensure stability, developers often pin dependency versions in files like `requirements.txt` or `package.json`. This prevents unexpected changes, but it also freezes the state of the dependencies, even if bugs or vulnerabilities are later discovered in them.

4. Outdated base images

The base images chosen for containers also age. For example, if the application is built on the `node:16` image while upstream has already moved to `node:18` with improvements and fixes, your environment stays on an outdated version even if the code inside works correctly.

How to avoid problems with outdated dependencies

Include regular checks for outdated dependencies and vulnerabilities in your CI/CD process:

– For Python:

pip list --outdated

– For Node.js:

npm outdated

– Use vulnerability scanners, for example `trivy`:

trivy image my-app

Monitor base image updates

Subscribe to base image updates on Docker Hub or in the corresponding GitHub repositories to learn about critical fixes and updates in time.

Conclusion

Dependency hell arose not only from the need to fix vulnerabilities, but also from the difficulty of managing and updating dependencies. Docker offered an effective answer to DH by providing isolated, stable environments for applications. But with containerization came a new task: images must be rebuilt regularly to keep dependencies from going stale and critical vulnerabilities from creeping in.

Modern DevOps specialists need not only to resolve version conflicts, but also to adopt regular, automated practices for keeping dependencies current, so that containers stay secure and effective.

Builder Pattern: creating an object in stages over time

Introduction

The previous article covered the general use of the Builder pattern, but did not touch on the case where an object is created in stages over time.
Builder is a creational design pattern that lets you construct complex objects step by step. It is especially useful when an object has many parameters or multiple configurations. One interesting use is spreading the construction of an object out over time.
Sometimes an object cannot be created all at once – its parameters may become known at different stages of the program.

An example in Python

In this example, a car object is created in stages: first some data is loaded from a server, then the user enters the missing information.

import requests

# A minimal CarBuilder: accumulates parameters as they become known
class CarBuilder:
    def __init__(self):
        self.model = None
        self.year = None
        self.color = None
        self.gps = False

    def set_model(self, model): self.model = model
    def set_year(self, year): self.year = year
    def set_color(self, color): self.color = color
    def set_gps(self, gps): self.gps = gps

    def build(self):
        return f"Car({self.model}, {self.year}, {self.color}, gps={self.gps})"

def fetch_car_data():
    response = requests.get("https://api.example.com/car-info")
    return response.json()

builder = CarBuilder()

# Backend API data
car_data = fetch_car_data()
builder.set_model(car_data["model"])
builder.set_year(car_data["year"])

# User input
color = input("Car color: ")
builder.set_color(color)

gps_option = input("GPS feature? (yes/no): ").lower() == "yes"
builder.set_gps(gps_option)

car = builder.build()
print(car)

Now imagine that the API call and the data entry happen in different parts of the application, or even in different libraries. The value of the Builder pattern becomes far more obvious than in the simple example above.

Advantages

– the output is an immutable structure that does not need optional fields for the intermediate assembly state
– the object is assembled gradually
– complex constructors are avoided
– the object’s assembly code is encapsulated in a single Builder entity
– the code is easier to understand

Sources

https://www.amazon.com/Design-Patterns-Object-Oriented-Addison-Wesley-Professional-ebook/dp/B000SEIBB8
https://demensdeum.com/blog/2019/09/23/builder-pattern/

Demensdeum Coding Challenge #1

Demensdeum Coding Challenge #1 is open!
Prize: 100 USDT
1. You need to write an image renderer for Windows 11 64-bit
2. The image to render:
https://demensdeum.com/logo/demens1.png
3. The image must be embedded in the application
4. Graphics API – Direct3D or DirectDraw
5. The winner is the one whose application is the smallest in bytes
6. The image must be displayed exactly 1:1 as the original, preserving its colors
7. Any languages/frameworks are allowed, as long as they require no additional installation – the application must run immediately. For example, a solution that is just a single Python script is not acceptable, since it requires installing Python and Pygame and launching it by hand. A good example: a Python script bundled together with Python and Pygame into an EXE that starts without any additional installation.
8. Submit a link to a public repository with the source code and build instructions for the application. A good example: a project with build instructions for Visual Studio Community Edition.

Deadline: June 1, when the contest results will be announced.

Reference solution in Zig + SDL3 + SDL3_image:
https://github.com/demensdeum/DemensDeum-Coding-Challenge-1

Ghost Contacts

The GhostContacts app lets you add contacts to a secret list. It supports dark and light themes, localization, CSV contact export and import, and an emergency password that wipes the contact list if someone forces the user to reveal the regular entry password.

Application online:
https://demensdeum.com/software/ghost-contacts/

Github:
https://github.com/demensdeum/GhostContacts

Why I chose WordPress

When I thought about creating my own blog in 2015, I faced the question: which platform to choose? After much searching and comparison, I settled on WordPress. This was not a random choice, but the result of analyzing the platform’s capabilities, its advantages and disadvantages. Today, I would like to share my thoughts and experience using WordPress.

Advantages of WordPress

  • Ease of use
    One of the main reasons why I chose WordPress is its intuitive interface. Even if you have never worked with a CMS before, you can master WordPress in a matter of days.
  • A huge number of plugins
    WordPress provides access to thousands of free and paid plugins. These extensions allow you to add almost any functionality related to blogging, from SEO optimization to social media integration.
  • Scalability
    WordPress is great for blogs of all sizes. Having started with a simple personal blog, I know I can easily grow it by adding new features and functionality.
  • Wide selection of themes
    WordPress offers a huge number of free and paid themes that let you create a good-looking blog in a short time. A custom design, however, will require the careful hand of a designer.
  • SEO-friendly
    WordPress is designed to be search engine friendly by default. Plugins like Yoast SEO make it easy to optimize your content to improve its search rankings.
  • Community and Support
    WordPress has one of the largest communities in the world. If you have a problem, you’ll almost certainly find a solution on forums or blogs dedicated to the platform.
  • Multilingual support
    Thanks to plugins like WPGlobus, I can blog in multiple languages, which is especially important when working with an audience from different countries.

Disadvantages of WordPress

  • Vulnerability to attacks
    WordPress’ popularity makes it a target for hackers. Without proper protection, your site can become a victim of attacks. However, regular updates and installing security plugins help minimize the risks.
  • Plugin Dependency
    Sometimes the functionality you want to add requires installing multiple plugins. This can slow down your blog and cause conflicts between extensions.
  • Performance Issues
    On large blogs, WordPress can start to slow down, especially if many plugins are used. To solve this problem, you need to optimize the database, implement caching, and use a more powerful hosting.
  • Cost of some functions
    While the basic version of WordPress is free, many professional themes and plugins cost money. Sometimes you have to invest to get all the features.

Conclusion

WordPress is a tool that provides the perfect balance between simplicity and power. For me, its advantages outweigh the disadvantages, especially considering the large number of solutions to overcome them. Thanks to WordPress, I was able to create a blog that perfectly suits my needs.

Wordex – speed reading program for iOS

I recently found a speed reading app that I would like to recommend to you.

Speed ​​reading is a skill that can greatly increase your productivity, improve your reading comprehension, and save you time. There are many apps on the market that promise to help you master this skill, but Wordex for iOS stands out among them. In this article, we will tell you what Wordex is, what features it has, who it is suitable for, and why it is worth considering.

What is Wordex?

Wordex is an iOS app designed specifically to develop speed reading skills. It helps users read texts faster, focus on key ideas, and avoid distractions. The program is based on scientific approaches and offers convenient tools to improve reading speed.

Main features of Wordex

  • Speed ​​reading mode: text is displayed in an optimized manner for quick comprehension. Users can adjust the speed of text display depending on their needs.
  • Progress Analysis: The program provides detailed statistics, including reading speed and improvement dynamics. This helps you evaluate your progress and adjust your approach to reading.
  • Text import: Wordex allows you to upload your own texts for practice. You can read articles, books or training materials directly in the application.
  • Intuitive interface: the application is designed in a minimalist style, which makes it easy to use. Even beginners will easily understand the functionality.


Wordex Screenshot 1

Who is Wordex suitable for?

Wordex is ideal for:

  • Students who need to quickly read course materials and prepare for exams.
  • Business people and office workers who want to process large amounts of information in minimal time.
  • Readers who want to read more books and enjoy the process.


Wordex Screenshot 2

Advantages of Wordex

  • Mobility: you can exercise anywhere and anytime thanks to the app on your iPhone or iPad.
  • Personalization: the ability to customize the display of text to suit your needs.


Wordex Screenshot 3

Why try Wordex?

Wordex is not just a tool for learning speed reading. It is a program that develops concentration, expands vocabulary and increases productivity. Once you try Wordex, you will notice how reading stops being a routine and turns into an exciting activity.

Conclusion

If you want to learn speed reading or improve your existing skills, Wordex is a great choice. Easy to use and effective, the app will help you achieve your goals and save valuable time. Download Wordex from the App Store and start practicing today!

AppStore:
https://apps.apple.com/us/app/speed-reading-book-reader-app/id1462633104

Why is DRY important?

There are many articles on the topic of DRY, I recommend reading the original “The Pragmatic Programmer” by Andy Hunt and Dave Thomas. However, I still see many developers having questions about this principle in software development.

The DRY principle states that we must not repeat ourselves, this applies to both code and the processes we perform as programmers. An example of code that violates DRY:

class Client {
    public let name: String
    private var messages: [String] = []
    
    init(name: String) {
        self.name = name
    }
    
    func receive(_ message: String) {
        messages.append(message)
    }
}

class ClientController {
    func greet(client: Client?) {
        guard let client else {
            debugPrint("No client!")
            return
        }
        client.receive("Hello \(client.name)!")
    }

    func goodbye(client: Client?) {
        guard let client else {
            debugPrint("No client!!")
            return
        }
        client.receive("Bye \(client.name)!")
    }
}

As you can see, the greet and goodbye methods each receive an optional Client instance, which must be checked for nil before any work can begin. To comply with DRY, the repeated nil check must be removed. This can be done in many ways; one option is to pass the instance into the class constructor, after which the checks become unnecessary.

We maintain DRY by specializing ClientController on a single Client instance:

class Client {
    public let name: String
    private var messages: [String] = []
    
    init(name: String) {
        self.name = name
    }
    
    func receive(_ message: String) {
        messages.append(message)
    }
}

class ClientController {
    private let client: Client

    init(client: Client) {
        self.client = client
    }

    func greet() {
        client.receive("Hello \(client.name)!")
    }

    func goodbye() {
        client.receive("Bye \(client.name)!")
    }
}

DRY also concerns the processes that occur during software development. Let’s imagine a situation in which a team of developers has to release a release to the market themselves, distracting them from software development, this is also a violation of DRY. This situation is resolved by connecting a CI/CD pipeline, in which the release is released automatically, subject to certain conditions by the developers.

In general, DRY is about the absence of repetition in both processes and code. This also matters because of the human factor: code with less repetitive noise is easier to check for errors, and automated processes leave no room for human mistakes, because no human is involved.

As a saying often attributed to Steve Jobs goes, “A line of code you never have to write is a line of code you never have to debug.”

Sources

https://pragprog.com/titles/tpp20/the-pragmatic-programmer-20th-anniversary-edition/
https://youtu.be/-msIEOGvTYM

I will help you with iOS development in Swift or Objective-C

I am happy to announce that I am now offering my services as an iOS developer on Fiverr. If you need help developing quality iOS apps or improving your existing projects, check out my profile:
https://www.fiverr.com/s/Q7x4kb6

I would be glad to have the opportunity to work on your project.
Email: demensdeum@gmail.com
Telegram: https://t.me/demensdeum

Dynamic Linking of Qt Applications on macOS

Today I released a version of RaidenVideoRipper for Apple devices running macOS on M1/M2/M3/M4 (Apple Silicon) processors. RaidenVideoRipper is a quick video editing application that lets you cut a part of a video file into a new file; you can also make a GIF or export the audio track to MP3.

Below I will briefly describe the commands I used to do this. The theory behind what is happening here, along with the documentation for the utilities, can be found at the following links:
https://www.unix.com/man-page/osx/1/otool/
https://www.unix.com/man-page/osx/1/install_name_tool/
https://llvm.org/docs/CommandGuide/llvm-nm.html
https://linux.die.net/man/1/file
https://www.unix.com/man-page/osx/8/SPCTL/
https://linux.die.net/man/1/chmod
https://linux.die.net/man/1/ls
https://man7.org/linux/man-pages/man7/xattr.7.html
https://doc.qt.io/qt-6/macos-deployment.html

First, install Qt on your macOS machine, along with the environment for Qt Desktop Development. After that, build your project, for example in Qt Creator. Below I describe what is needed so that dependencies on external dynamic libraries resolve correctly when the application is distributed to end users.

Create a Frameworks directory in the YOUR_APP.app/Contents folder of your application and put the external dependencies in it. For example, this is what Frameworks looks like for the RaidenVideoRipper application:

Frameworks
├── DullahanFFmpeg.framework
│   ├── dullahan_ffmpeg.a
│   ├── libavcodec.60.dylib
│   ├── libavdevice.60.dylib
│   ├── libavfilter.9.dylib
│   ├── libavformat.60.dylib
│   ├── libavutil.58.dylib
│   ├── libpostproc.57.dylib
│   ├── libswresample.4.dylib
│   └── libswscale.7.dylib
├── QtCore.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtCore -> Versions/Current/QtCore
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
├── QtGui.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtGui -> Versions/Current/QtGui
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
├── QtMultimedia.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtMultimedia -> Versions/Current/QtMultimedia
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
├── QtMultimediaWidgets.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtMultimediaWidgets -> Versions/Current/QtMultimediaWidgets
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
├── QtNetwork.framework
│   ├── Headers -> Versions/Current/Headers
│   ├── QtNetwork -> Versions/Current/QtNetwork
│   ├── Resources -> Versions/Current/Resources
│   └── Versions
└── QtWidgets.framework
    ├── Headers -> Versions/Current/Headers
    ├── QtWidgets -> Versions/Current/QtWidgets
    ├── Resources -> Versions/Current/Resources
    └── Versions

For simplicity, I printed the tree only down to the second level of nesting.

Next, print the application’s current dynamic dependencies:

otool -L RaidenVideoRipper 

Output for the RaidenVideoRipper binary, which is located in RaidenVideoRipper.app/Contents/MacOS:

RaidenVideoRipper:
	@rpath/DullahanFFmpeg.framework/dullahan_ffmpeg.a (compatibility version 0.0.0, current version 0.0.0)
	@rpath/QtMultimediaWidgets.framework/Versions/A/QtMultimediaWidgets (compatibility version 6.0.0, current version 6.8.1)
	@rpath/QtWidgets.framework/Versions/A/QtWidgets (compatibility version 6.0.0, current version 6.8.1)
	@rpath/QtMultimedia.framework/Versions/A/QtMultimedia (compatibility version 6.0.0, current version 6.8.1)
	@rpath/QtGui.framework/Versions/A/QtGui (compatibility version 6.0.0, current version 6.8.1)
	/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit (compatibility version 45.0.0, current version 2575.20.19)
	/System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO (compatibility version 1.0.0, current version 1.0.0)
	/System/Library/Frameworks/Metal.framework/Versions/A/Metal (compatibility version 1.0.0, current version 367.4.0)
	@rpath/QtNetwork.framework/Versions/A/QtNetwork (compatibility version 6.0.0, current version 6.8.1)
	@rpath/QtCore.framework/Versions/A/QtCore (compatibility version 6.0.0, current version 6.8.1)
	/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit (compatibility version 1.0.0, current version 275.0.0)
	/System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration (compatibility version 1.0.0, current version 1.0.0)
	/System/Library/Frameworks/UniformTypeIdentifiers.framework/Versions/A/UniformTypeIdentifiers (compatibility version 1.0.0, current version 709.0.0)
	/System/Library/Frameworks/AGL.framework/Versions/A/AGL (compatibility version 1.0.0, current version 1.0.0)
	/System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 1800.101.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1351.0.0)

As you can see, RaidenVideoRipper depends on Qt and dullahan_ffmpeg. Dullahan FFmpeg is a fork of FFmpeg that encapsulates its functionality in a dynamic library, with the ability to get the current progress of execution and to cancel it, using C procedures.
Next, replace the paths of the application and all necessary libraries using install_name_tool.

The command for this is:

install_name_tool -change old_path new_path target

Example of use:

install_name_tool -change /usr/local/lib/libavfilter.9.dylib @rpath/DullahanFFmpeg.framework/libavfilter.9.dylib dullahan_ffmpeg.a

After you have set all the correct paths, the application should start correctly. Check that all paths to the libraries are relative, move the binary, and open it again.
If you see an error, inspect the paths with otool and change them again with install_name_tool.
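To avoid re-reading otool output by eye on every iteration, you can filter it for dependency paths that are still absolute; a minimal sketch (the sample otool output in the comment is abbreviated from the listing above):

```shell
# List dependency paths that are not yet relative, i.e. anything that
# does not start with @rpath/@executable_path/@loader_path and does
# not live in the system locations /usr/lib or /System.
check_paths() {
  awk 'NR > 1 { print $1 }' \
    | grep -vE '^(@rpath|@executable_path|@loader_path|/usr/lib|/System)' \
    || true
}

# Typical use on macOS:
# otool -L RaidenVideoRipper | check_paths
```

An empty result means every remaining dependency is either relative or a system library, so the bundle should survive being moved.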

There is also an error caused by dependency confusion, when the library you substituted does not contain a required symbol in its table. You can check for the presence or absence of the symbol like this:

nm -gU path

Once executed, you will see the exported (global, defined) symbols of the library or application.
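In practice you usually combine nm with grep to check for the one symbol the dynamic loader complains about; a small sketch (the library path and symbol name in the usage comment are hypothetical examples):

```shell
# Return success if the library exports the given symbol.
# nm -gU prints only global, defined symbols.
has_symbol() {
  nm -gU "$1" | grep -q "$2"
}

# Hypothetical usage:
# has_symbol Frameworks/DullahanFFmpeg.framework/libavcodec.60.dylib _avcodec_open2
```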
It is also possible that you copied dependencies built for the wrong architecture; you can check this using file:

file path

The file utility will show which architecture a library or application was built for.

Qt also requires a Plugins folder inside the Contents folder of your YOUR_APP.app directory; copy the plugins from your Qt installation into Contents. Then check that the application works; after that, you can start optimizing the Plugins folder by deleting items from it and re-testing the application.

macOS Security

Once you have copied all the dependencies and fixed the paths for dynamic linking, you will need to sign the application with a developer signature and also send a build of the application to Apple for notarization.

If you don’t have $100 for a developer license or don’t want to sign anything, then write instructions for your users on how to launch the application.

These instructions also work for RaidenVideoRipper:

  • Disable Gatekeeper: spctl --master-disable
  • Allow launch from any source in Privacy & Security: switch “Allow applications downloaded from” to “Anywhere”
  • Remove the quarantine flag from an application downloaded as a zip or dmg: xattr -d com.apple.quarantine app.dmg
  • Check that the quarantine flag (com.apple.quarantine) is gone: ls -l@ app.dmg
  • Confirm the launch of the application in Privacy & Security, if necessary

The quarantine-flag problem usually shows up as the error “The application is damaged” on the user’s screen. In this case, you need to remove the quarantine flag from the file’s metadata.

Link to RaidenVideoRipper build for Apple Silicon:
https://github.com/demensdeum/RaidenVideoRipper/releases/download/1.0.1.0/RaidenVideoRipper-1.0.1.0.dmg

Video stabilization with ffmpeg

If you want to stabilize your video and remove camera shake, the `ffmpeg` tool offers a powerful solution. Thanks to the built-in `vidstabdetect` and `vidstabtransform` filters, you can achieve professional results without using complex video editors.

Preparing for work

Before you begin, make sure your `ffmpeg` supports the `vidstab` library. On Linux, you can check this with the command:

ffmpeg -filters | grep vidstab

If the library is not installed, you can add it:

sudo apt install ffmpeg libvidstab-dev  

Installation for macOS via brew:

brew install libvidstab
brew install ffmpeg

Now let’s move on to the process.

Step 1: Movement Analysis

First, you need to analyze the video motion and create a file with stabilization parameters.

ffmpeg -i input.mp4 -vf vidstabdetect=shakiness=10:accuracy=15:result=transforms.trf -f null -

Parameters:

shakiness: The level of video shaking (default 5; can be raised to 10 for more severe cases).
accuracy: Analysis accuracy (default 15).
result: The name of the file to save the motion parameters to.

Step 2: Applying Stabilization

Now you can apply stabilization using the transformation file:

ffmpeg -i input.mp4 -vf vidstabtransform=input=transforms.trf:zoom=5 output.mp4

Parameters:

input: Points to the file with transformation parameters (created in the first step).
zoom: Additional zoom percentage, which helps hide the black borders left by stabilization (e.g. 5 zooms in by 5%).
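The two passes can be wrapped into a single helper so the transforms file name stays consistent between them; a minimal sketch, with placeholder filenames:

```shell
# Two-pass stabilization: pass 1 analyzes motion into a transforms
# file, pass 2 applies it with a 5% zoom to hide black borders.
stabilize() {
  in=$1
  out=$2
  trf=transforms.trf
  ffmpeg -y -i "$in" -vf "vidstabdetect=shakiness=10:accuracy=15:result=$trf" -f null -
  ffmpeg -y -i "$in" -vf "vidstabtransform=input=$trf:zoom=5" "$out"
}

# Usage:
# stabilize input.mp4 stabilized.mp4
```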

Automatic code analysis with Bistr

If you need to analyze the source code of a project, but want to automate this process and use the local power of your computer, the Bistr utility can be a great solution. In this article, we will look at how this utility helps analyze code using the Ollama machine learning model.

What is Bistr?

Bistr is a source code analysis utility that lets you integrate a local LLM (large language model) runtime such as Ollama to analyze and process code. With Bistr, you can analyze files in various programming languages such as Python, C, Java, JavaScript, HTML, and more.

Bistr uses the model to check files against specific queries, such as finding an answer to a question about the functionality of the code or a part of it. This provides a structured analysis that helps in developing, testing, and maintaining projects.

How does Bistr work?

  • Load state: When you start an analysis, the utility checks whether the analysis state has been saved previously. This helps you continue where you left off without having to re-analyze the same files.
  • Code Analysis: Each file is analyzed using the Ollama model. The tool sends a request to the model to analyze a specific piece of code. The model returns information about the relevance of the code in response to the request, and also provides a textual explanation of why the given piece is relevant to the task.
  • State Preservation: After each file is parsed, the state is updated to continue with up-to-date information next time.
  • Results output: All analysis results can be exported to an HTML file, which contains a table with a rating of files by relevance, which helps to understand which parts of the code are most important for further analysis.

Installation and launch

To use Bistr, you need to install and run Ollama, a platform that provides LLM models, on your local machine. The Ollama installation instructions for macOS, Windows, and Linux are described below.

Download the latest version of Bistr from git:
https://github.com/demensdeum/Bistr/

After installing Ollama and Bistr, you can start code analysis. To do this, you need to prepare the source code and specify the path to the directory containing the files to be analyzed. The utility allows you to continue the analysis from where you left off, and also provides the ability to export the results in HTML format for easy further analysis.

Example command to run the analysis:


python bistr.py /path/to/code --model llama3.1:latest --output-html result.html --research "What is the purpose of this function?"

In this command:

--model specifies the model to be used for analysis.
--output-html specifies the path to save the analysis results in an HTML file.
--research allows you to ask a question that you want to answer by analyzing the code.

Benefits of using Bistr

  • Local execution: Analysis is performed on your computer without the need to connect to cloud services, which speeds up the process.
  • Flexibility: You can analyze code in different programming languages.
  • Automation: All code analysis work is automated, which saves time and effort, especially when working with large projects.

Local neural networks using ollama

If you want to run something like ChatGPT and you have a powerful enough computer, for example one with an Nvidia RTX video card, you can run the ollama project, which lets you use one of the ready-made LLM models on a local machine, absolutely free. ollama provides the ability to chat with LLM models, as with ChatGPT, and the latest version also announced the ability to read images and to format output data as JSON.

I have also run the project on a MacBook with an Apple M2 processor, and I know that the latest AMD video cards are supported.

To install on macOS, visit the ollama website:
https://ollama.com/download/mac

Click “Download for macOS”, and an archive of the form ollama-darwin.zip will be downloaded; inside the archive is Ollama.app, which you need to copy to “Applications”. After that, launch Ollama.app; the installation process will most likely run on first launch. You will then see the ollama icon in the menu bar, at the top right next to the clock.

After that, open a regular macOS terminal and type the command to download, install, and launch any ollama model. The list of available models, their descriptions, and their characteristics can be found on the ollama website:
https://ollama.com/search

Choose a model with fewer parameters if the larger one does not fit into your video card’s memory at startup.

For example, the command to download and launch the llama3.1:latest model:


ollama run llama3.1:latest

Installation on Windows and Linux is broadly similar: on Windows there is an ollama installer, and further work with it happens through PowerShell.
On Linux, installation is done by a script, but I recommend using the version from your specific package manager instead. On Linux, ollama can also be launched from a regular bash terminal.
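Besides the interactive terminal prompt, the ollama server also exposes a local HTTP API (by default on port 11434) that can be queried with curl; a minimal sketch, assuming the llama3.1:latest model has already been pulled:

```shell
# Send a single non-streaming prompt to the local ollama HTTP API.
ask() {
  curl -s http://localhost:11434/api/generate \
    -d "{\"model\": \"llama3.1:latest\", \"prompt\": \"$1\", \"stream\": false}"
}

# Usage (with the ollama server running):
# ask "Why is the sky blue?"
```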

Sources
https://www.youtube.com/watch?v=Wjrdr0NU4Sk
https://ollama.com