Kaban Board

KabanBoard is an open-source web application for managing tasks in Kanban format. The project focuses on simplicity, a clear architecture, and easy adaptation to the specific needs of a team or an individual developer.

The solution is suitable for small projects, internal team processes, or as the basis for your own product without being tied to third-party SaaS services.

The project repository is available on GitHub:
https://github.com/demensdeum/KabanBoard

Main features

KabanBoard implements a basic and practical set of functions for working with Kanban boards.

  • Creating multiple boards for different projects
  • Column structure with task statuses
  • Task cards with the ability to edit and delete
  • Moving tasks between columns (drag & drop)
  • Color coding of cards
  • Dark interface theme

The feature set is deliberately lean and focused on everyday work with tasks.

Technologies used

The project is built on a common, easy-to-follow stack.

  • Frontend: Vue 3, Vite
  • Backend: Node.js, Express
  • Data storage: MongoDB

The client and server parts are separated, which simplifies maintenance and further development of the project.

Project deployment

To run locally, you will need a standard environment.

  • Node.js
  • MongoDB (locally or via cloud)

The project can be launched either in normal mode via npm or using Docker, which is convenient for quick deployment in a test or internal environment.

Practical application

KabanBoard can be used in different scenarios.

  • Internal task management tool
  • Basis for a custom Kanban solution
  • Training project for studying SPA architecture
  • Starting point for a pet project or portfolio

Conclusion

KabanBoard is a neat and practical solution for working with Kanban boards. The project does not aim to replace large corporate systems, but it is well suited for small teams, individual use, and further development for specific needs.

Gofis

Gofis is a lightweight command line tool for quickly searching files in the file system.
It is written in Go and makes heavy use of concurrency (goroutines), which makes it especially efficient
when working with large directories and projects.

The project is available on GitHub:
https://github.com/demensdeum/gofis

🧠 What is Gofis

Gofis is a CLI utility for searching files by name, extension or regular expression.
Unlike classic tools like find, gofis was originally designed
with an emphasis on speed, readable output, and parallel directory processing.

The project is distributed under the MIT license and can be freely used
for personal and commercial purposes.

⚙️ Key features

  • Parallel directory traversal using goroutines
  • Search by file name and regular expressions
  • Filtering by extensions
  • Ignoring heavy directories (.git, node_modules, vendor)
  • Human-readable output of file sizes
  • Minimal dependencies and fast build

🚀 Installation

Building requires an installed Go toolchain.

git clone https://github.com/demensdeum/gofis
cd gofis
go build -o gofis main.go

Once built, the binary can be used directly.

A standalone build for modern versions of Windows is also available on the releases page:
https://github.com/demensdeum/gofis/releases/

🔍 Examples of use

Search files by name:

./gofis -n "config" -e ".yaml" -p ./src

Quick positional search:

./gofis "main" "./projects" 50

Search using regular expression:

./gofis "^.*\.ini$" "/"

🧩 How it works

Gofis is based on Go’s concurrency model:

  • Each directory is processed in a separate goroutine
  • A semaphore limits the number of active tasks
  • Results are passed back through channels

This approach allows efficient use of CPU resources
and significantly speeds up searching on large file trees.
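The same pattern can be illustrated outside Go. Below is a minimal Python sketch of the idea described above: one task per directory, a semaphore capping concurrency, and a queue standing in for a Go channel. This is an illustration of the pattern only, not gofis's actual code.

```python
import os
import asyncio

SKIP = {".git", "node_modules", "vendor"}  # heavy directories to ignore

async def walk(path, needle, results, sem):
    async with sem:  # limit how many tasks touch the filesystem at once
        try:
            entries = list(os.scandir(path))
        except OSError:
            return  # unreadable directory: skip silently
    subtasks = []
    for entry in entries:
        if entry.is_dir(follow_symlinks=False):
            if entry.name not in SKIP:
                # each subdirectory gets its own task (the goroutine analogue)
                subtasks.append(asyncio.create_task(
                    walk(entry.path, needle, results, sem)))
        elif needle in entry.name:
            await results.put(entry.path)  # report a match via the queue
    if subtasks:
        await asyncio.gather(*subtasks)

async def search(root, needle, max_tasks=8):
    sem = asyncio.Semaphore(max_tasks)
    results = asyncio.Queue()
    await walk(root, needle, results, sem)
    return [results.get_nowait() for _ in range(results.qsize())]

# Example: find files whose name contains "config" under the current tree.
print(asyncio.run(search(".", "config")))
```

The semaphore is the key piece: without it, a deep tree could spawn thousands of simultaneous directory reads and exhaust file descriptors.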

👨‍💻 Who is Gofis suitable for?

  • Developers working with large repositories
  • DevOps and system administrators
  • Users who need a quick search from the terminal
  • Anyone learning the practical uses of concurrency in Go

📌 Conclusion

Gofis is a simple but effective tool that does one thing and does it well.
If you often search for files in large projects and value speed,
this CLI tool is definitely worth a look.

ollama-call

If you use Ollama and don’t want to write your own API wrapper every time,
the ollama_call project significantly simplifies the work.

This is a small Python library that allows you to send a request to a local LLM with one function
and immediately receive a response, including as JSON.

Installation

pip install ollama-call

Why it’s needed

  • minimal code for working with the model;
  • structured JSON response for further processing;
  • convenient for rapid prototypes and MVPs;
  • supports streaming output if necessary.

Usage example

from ollama_call import ollama_call

response = ollama_call(
    user_prompt="Hello, how are you?",
    format="json",
    model="gemma3:12b"
)

print(response)

When it is especially useful

  • you write scripts or services on top of Ollama;
  • need a predictable response format;
  • there is no desire to connect heavy frameworks.

Summary

ollama_call is a lightweight and clear wrapper for working with Ollama from Python.
A good choice if simplicity and quick results are important.

GitHub
https://github.com/demensdeum/ollama_call

SFAP: a modular framework for modern data acquisition and processing

With the rapid development of automation and artificial intelligence, the task of effectively collecting,
cleaning, and transforming data has become critical. Most solutions cover only
individual stages of this process, requiring complex integration and maintenance.

SFAP (Seek · Filter · Adapt · Publish) is an open-source Python project
that offers a holistic and extensible approach to processing data at every stage of its lifecycle:
from discovering sources to publishing the finished result.

What is SFAP

SFAP is an asynchronous framework built around a clear concept of a data processing pipeline.
Each stage is logically separate and can be independently expanded or replaced.

The project is based on the Chain of Responsibility architectural pattern, which provides:

  • pipeline configuration flexibility;
  • simple testing of individual stages;
  • scalability for high loads;
  • clean separation of responsibilities between components.

Main stages of the pipeline

Seek – data search

At this stage, data sources are discovered: web pages, APIs, file storages
or other information flows. SFAP makes it easy to connect new sources without changing
the rest of the system.

Filter – filtering

Filtering is designed to remove noise: irrelevant content, duplicates, technical elements
and low quality data. This is critical for subsequent processing steps.

Adapt – adaptation and processing

The adaptation stage is responsible for data transformation: normalization, structuring,
semantic processing and integration with AI models (including generative ones).

Publish – publication

At the final stage, the data is published in the target format: databases, APIs, files, external services
or content platforms. SFAP does not limit how the result is delivered.
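As a rough illustration, the four stages above can be wired together as an asyncio Chain of Responsibility. All class and method names below are illustrative only, not SFAP's actual API:

```python
import asyncio

class Handler:
    """One link in the chain: process items, then pass them on."""
    def __init__(self, successor=None):
        self.successor = successor

    async def handle(self, items):
        items = await self.process(items)
        if self.successor:
            return await self.successor.handle(items)
        return items

    async def process(self, items):
        return items

class Seek(Handler):
    async def process(self, items):
        # Discover sources; here we just emit fake records.
        texts = ["data", "", "noise", "data2"]
        return [{"url": f"https://example.com/{i}", "text": t}
                for i, t in enumerate(texts)]

class Filter(Handler):
    async def process(self, items):
        # Drop empty and irrelevant records.
        return [r for r in items if r["text"] and r["text"] != "noise"]

class Adapt(Handler):
    async def process(self, items):
        # Normalize / transform each record.
        return [{**r, "text": r["text"].upper()} for r in items]

class Publish(Handler):
    async def process(self, items):
        # Deliver the result; here we simply return it.
        return items

# Chain the stages: Seek -> Filter -> Adapt -> Publish.
pipeline = Seek(Filter(Adapt(Publish())))
result = asyncio.run(pipeline.handle([]))
print(result)
```

Because each stage only knows its successor, any stage can be swapped or extended without touching the rest of the chain, which is the core promise of the Chain of Responsibility pattern.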

Key features of the project

  • Asynchronous architecture based on asyncio
  • Modularity and extensibility
  • Support for complex processing pipelines
  • Ready for integration with AI/LLM solutions
  • Suitable for high-load systems

Practical use cases

  • Aggregation and analysis of news sources
  • Preparing datasets for machine learning
  • Automated content pipeline
  • Cleansing and normalizing large data streams
  • Integration of data from heterogeneous sources

Getting started with SFAP

All you need to get started is:

  1. Clone the project repository;
  2. Install Python dependencies;
  3. Define your own pipeline steps;
  4. Run the asynchronous data processing pipeline.

The project is easily adapted to specific business tasks and can grow with the system,
without turning into a monolith.

Conclusion

SFAP is not just a parser or data collector, but a full-fledged framework for building
modern data-pipeline systems. It suits developers and teams who value
scalable, architecturally clean data processing.
The project source code is available on GitHub:
https://github.com/demensdeum/SFAP

FlutDataStream

A Flutter app that converts any file into a sequence of machine-readable codes (QR and DataMatrix) for high-speed data streaming between devices.

Features
* Dual Encoding: Represents each data block as both a QR code and a DataMatrix code.
* High-Speed Streaming: Supports automatic code switching at intervals as short as 330 ms.
* Smart Chunking: Automatically splits files into configurable chunks (default: 512 bytes).
* Detailed Scanner: Reads ASCII data in real time for debugging and instant feedback.
* Automatic Recovery: Instantly reassembles received files and saves them to your downloads directory.
* System Integration: Automatically opens the saved file with the default system application after completion.
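As a rough illustration of the chunking idea (the app itself is written in Dart/Flutter, so this Python sketch is not its actual code), splitting a payload into fixed-size blocks and reassembling them looks like this:

```python
CHUNK_SIZE = 512  # default block size mentioned above

def split(data: bytes, size: int = CHUNK_SIZE) -> list[bytes]:
    """Split a payload into consecutive fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def join(chunks: list[bytes]) -> bytes:
    """Reassemble blocks, in order, back into the original payload."""
    return b"".join(chunks)

payload = bytes(1300)           # a 1300-byte "file"
chunks = split(payload)
print(len(chunks))              # 3 blocks: 512 + 512 + 276 bytes
assert join(chunks) == payload  # lossless round trip
```

In the real app each block would additionally carry a sequence number so the receiver can detect missed codes and reorder arrivals; that bookkeeping is omitted here for brevity.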

https://github.com/demensdeum/FlutDataStream

Why can’t I fix the bug?

You spend hours working on the code, going through hypotheses and adjusting conditions, but the bug still reproduces. Sound familiar? This state of frustration is often called “ghost hunting”: the program seems to live a life of its own, ignoring your fixes.

One of the most common – and most annoying – reasons for this situation is looking for an error in completely the wrong place in the application.

The trap of “false symptoms”

When we see an error, our attention is drawn to the place where it surfaced. But in complex systems, the point where a bug manifests (a crash or an incorrect value) is only the end of a long chain of events. When you try to fix the ending, you are fighting the symptoms, not the disease.

This is where the flowchart concept comes in.

How it works in reality

Of course, you don’t have to literally draw a flowchart on paper every time, but it is important to have it in your head or at hand as an architectural guide. A flowchart lets you visualize the operation of an application as a tree of outcomes.

Without understanding this structure, a developer is often fumbling in the dark. Imagine the situation: you edit the logic in one conditional branch, while the application (due to a certain set of parameters) goes down a completely different branch that you didn’t even consider.

Result: you spend hours on a “perfect” code fix in one part of the algorithm, which, of course, does nothing for the problem in the part where it actually fails.


Algorithm for defeating a bug

To stop banging on a locked door, you need to change your approach to diagnosis:

  • Find the state in the outcome tree: Before writing code, determine exactly the path the application has taken. At what point did the logic take a wrong turn? What specific state led to the problem?
  • Reproduction is 80% of success: This is usually done by testers and automated tests. If the bug is “floating” (intermittent), developers join the process to jointly search for the triggering conditions.
  • Use as much information as possible: Logs, OS version, device parameters, connection type (Wi-Fi/5G), and even the specific telecom operator all matter for localizing the bug.

“Photograph” of the moment of error

Ideally, to fix the bug you need to capture the full state of the application at the moment it was reproduced. Interaction logs are also critically important: they show not only the final point but the entire user path (what actions preceded the failure). This helps you understand how to recreate the same state again.

Tip for the future: if you encounter a complex case, add extended debug logging to that section of code in case the situation happens again.
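For example, a hypothetical Python function might record every piece of decision-relevant state at DEBUG level, so the next occurrence of the bug leaves a full trail (the function and field names here are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("checkout")

def apply_discount(total: float, tier: str, coupons: list[str]) -> float:
    # Log the inputs that determine which branch will execute...
    log.debug("apply_discount: total=%s tier=%s coupons=%s", total, tier, coupons)
    rate = 0.1 if tier == "gold" else 0.0
    result = total * (1 - rate)
    # ...and the intermediate values, so the taken path is reconstructible.
    log.debug("apply_discount: rate=%s result=%s", rate, result)
    return result

print(apply_discount(100.0, "gold", ["WELCOME"]))  # → 90.0
```

The point is not the discount math but the trail: when the “ghost” reappears weeks later, the log already contains the exact state and branch that produced it.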


The problem of “elusive” states in the era of AI

In modern systems using LLM (Large Language Models), classical determinism (“one input, one output”) is often violated. You can pass exactly the same input data, but get a different result.

This happens due to the non-determinism of modern production systems:

  • GPU parallelism: Floating-point operations on the GPU are not associative. Because threads execute in parallel, the order in which numbers are added can change slightly, which can affect the result.
  • GPU temperature and throttling: Execution speed and load distribution may depend on the physical state of the hardware. In huge models, these microscopic differences accumulate and can lead to the selection of a different token at the output.
  • Dynamic batching: In the cloud, your request is combined with others. Different batch sizes change the mathematics of calculations in the kernels.
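The non-associativity of floating-point addition is easy to demonstrate in a few lines of Python: summing the same three numbers with different grouping yields different results.

```python
# Floating-point addition is not associative: grouping changes the result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```

On a GPU, thousands of threads effectively choose the grouping for you, and it can differ from run to run; in a model with billions of such additions, the tiny discrepancies can accumulate enough to flip a sampled token.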

Under such conditions, it becomes almost impossible to reproduce “that same state”. Only a statistical approach to testing can save you here.


When logic fails: Memory problems

If you are working with memory-unsafe languages (C or C++), the bug may be caused by memory corruption.

These are the most severe cases: an error in one module can “overwrite” data in another. This leads to completely inexplicable and isolated failures that cannot be traced using normal application logic.

How to protect yourself at the architectural level?

To avoid such “mystical” bugs, you should use modern approaches:

  • Multithreaded programming patterns: Clear synchronization eliminates race conditions.
  • Memory- and thread-safe languages: Tools that guarantee safety at compile time:
    • Rust: The ownership system eliminates memory errors.
    • Swift 6 Concurrency: Strong data isolation checks.
    • Erlang: Complete process isolation through the actor model.

Summary

Fixing a bug is not about writing new code, but about understanding how the old code works. Remember: you could be wasting time editing a branch that execution never even reaches. Record the state of the system, account for AI non-determinism, and choose safe tools.

Ferral

Ferral is a high-level, multi-paradigm programming language designed specifically for code generated by large language models (LLMs). While traditional languages were designed with human ergonomics in mind, Ferral is optimized for how LLMs reason, tokenize, and infer logic.

The name is spelled with two R’s, indicating a “reimagined” approach to the unpredictable nature of AI-generated code.

https://github.com/demensdeum/ferral