With the development of large language models (LLMs) such as ChatGPT, more and more developers use them to generate code, design architectures, and accelerate integration. In practice, however, it becomes noticeable that the classical architectural principles – SOLID, DRY, Clean Architecture – fit poorly with the peculiarities of LLM code generation.
This does not mean the principles are outdated – on the contrary, they work perfectly well in manual development. But with LLMs the approach has to be adapted.
Why LLMs struggle with architectural principles
Encapsulation
Encapsulation requires understanding the boundaries between parts of the system, knowing the developer's intent, and following strict access restrictions. LLMs often simplify the structure, make fields public for no reason, or duplicate an implementation. This makes the code more vulnerable to errors and violates architectural boundaries.
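A minimal, purely illustrative sketch of the contrast (the class names and the bank-account scenario are invented for this example): a hand-written class guards its invariant behind a method, while the flattened version an LLM often produces exposes the field directly.

```python
class Account:
    """Hand-written style: the balance is internal and changes only via deposit()."""
    def __init__(self) -> None:
        self._balance = 0  # leading underscore marks the field as internal

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._balance += amount

    @property
    def balance(self) -> int:
        return self._balance


class AccountFlat:
    """LLM-style output: the field is public, so any caller can break the invariant."""
    def __init__(self) -> None:
        self.balance = 0  # nothing stops a caller from writing acc.balance = -100
```

In the first class a negative balance is unrepresentable through the public API; in the second, the invariant exists only as long as every caller remembers it.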
Abstractions and interfaces
Design patterns such as Abstract Factory or Strategy require a holistic view of the system and an understanding of its dynamics. Models can create an interface without a clear purpose, fail to provide an implementation for it, or break the connections between layers. The result is a redundant or non-functional architecture.
DRY (Don't Repeat Yourself)
LLMs do not strive to minimize repeated code – on the contrary, it is easier for them to duplicate blocks than to extract shared logic. Although they can offer refactoring on request, by default models tend to generate "self-sufficient" fragments, even when this leads to redundancy.
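A small sketch of the tendency, with hypothetical function names and a deliberately naive email check: the model-style functions repeat the same validation inline, while the DRY version extracts it into one helper.

```python
def register_user_llm_style(email: str) -> str:
    # LLM output often repeats validation inline in every function
    if "@" not in email or "." not in email:
        raise ValueError("invalid email")
    return f"registered {email}"

def invite_user_llm_style(email: str) -> str:
    # ...the same block duplicated rather than shared
    if "@" not in email or "." not in email:
        raise ValueError("invalid email")
    return f"invited {email}"

# DRY version: the repeated check lives in exactly one place
def _validate_email(email: str) -> None:
    if "@" not in email or "." not in email:
        raise ValueError("invalid email")

def register_user(email: str) -> str:
    _validate_email(email)
    return f"registered {email}"
```

Both styles behave identically today; they diverge the day the validation rule changes and only some of the duplicated copies get updated.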
Clean Architecture
Clean Architecture implies a strict hierarchy, independence from frameworks, a fixed direction of dependencies, and minimal coupling between layers. Generating such a structure requires a global understanding of the system – and LLMs work at the level of word probabilities, not architectural integrity. As a result, the generated code mixes concerns, violates the direction of dependencies, and divides the system into layers only superficially.
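For concreteness, a hedged illustration of the dependency rule being described (the repository names are invented): the inner domain layer defines an abstract boundary and knows nothing about storage, while the outer layer depends inward by implementing it. LLM-generated code often inverts this, importing a database module directly from domain logic.

```python
from abc import ABC, abstractmethod

class UserRepository(ABC):
    """Domain layer: an abstract boundary with no knowledge of storage."""
    @abstractmethod
    def find(self, user_id: int) -> str: ...

def greet_user(repo: UserRepository, user_id: int) -> str:
    """Domain use case: depends only on the abstraction above."""
    return f"hello, {repo.find(user_id)}"

class InMemoryUserRepository(UserRepository):
    """Outer layer: depends inward by implementing the domain interface."""
    def __init__(self, users: dict):
        self._users = users

    def find(self, user_id: int) -> str:
        return self._users[user_id]
```

The arrow of dependency points from `InMemoryUserRepository` toward the domain, never the other way, which is exactly the direction a probabilistic generator has no structural reason to preserve.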
What works better when working with LLMs
WET instead of DRY
The WET (Write Everything Twice) approach is more practical when working with LLMs. Duplicated code does not require the model to retain context, which makes the result more predictable and easier to correct. It also reduces the likelihood of non-obvious couplings and bugs.
In addition, duplication helps compensate for the model's short memory: if a certain fragment of logic appears in several places, the LLM is more likely to take it into account during further generation. This simplifies maintenance and increases resistance to "forgetting".
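A sketch of the WET style being recommended, using invented parser names: each handler carries its own defaults inline, so either function can be regenerated or edited in isolation without a shared `normalize()` helper the model would have to keep in context.

```python
import json

def parse_order(raw: str) -> dict:
    data = json.loads(raw)
    # defaults repeated inline instead of extracted into a shared helper
    data.setdefault("currency", "USD")
    data.setdefault("quantity", 1)
    return data

def parse_refund(raw: str) -> dict:
    data = json.loads(raw)
    # the same defaults, deliberately duplicated (WET)
    data.setdefault("currency", "USD")
    data.setdefault("quantity", 1)
    data.setdefault("reason", "unspecified")
    return data
```

Asking a model to extend `parse_refund` requires no knowledge of `parse_order`; the cost is that a change to the shared defaults must be applied in both places.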
Simple structures instead of encapsulation
By avoiding complex encapsulation and relying on passing data directly between parts of the code, you can greatly simplify both generation and debugging. This is especially true for rapid iterative development or building an MVP.
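One possible shape of this style, with hypothetical names: a plain dataclass flows directly through free functions, with no getters or hidden state for the model to keep track of.

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    """Plain data: every field is visible and passed around directly."""
    title: str
    rows: list = field(default_factory=list)

def add_row(report: Report, row: str) -> Report:
    report.rows.append(row)  # fields are touched directly; easy to generate and debug
    return report

def render(report: Report) -> str:
    return f"{report.title}: {len(report.rows)} rows"
```

Because the entire state is visible at every step, a generated change to `render` cannot silently break an invariant hidden inside an accessor.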
Simplified architecture
A simple, flat project structure with a minimal number of dependencies and abstractions gives a more stable result during generation. The model handles such code more easily and is less likely to break the expected connections between components.
SDK integration – manual is more reliable
Most language models are trained on outdated versions of documentation. As a result, generated SDK installation instructions often contain errors: outdated commands, irrelevant parameters, or links to unavailable resources. Practice shows it is best to rely on official documentation and manual setup, leaving the LLM an auxiliary role – for example, generating template code or adapting configurations.
Why the principles still work – but in manual development
It is important to understand that the difficulties with SOLID, DRY, and Clean Architecture concern code generation through LLMs. When a developer writes code by hand, these principles continue to prove their value: they reduce coupling, simplify maintenance, and increase the readability and flexibility of the project.
This is because human thinking is prone to generalization. We look for patterns, extract repeated logic into separate entities, and create templates. This behavior probably has evolutionary roots: reducing the amount of information saves cognitive resources.
LLMs work differently: they are not burdened by the volume of data and do not strive for economy. On the contrary, it is easier for them to work with duplicated, fragmented information than to build and maintain complex abstractions. That is why they cope better with code that avoids encapsulation, contains repeated structures, and keeps architectural rigor to a minimum.
Conclusion
Large language models are a useful tool in development, especially in the early stages or when creating auxiliary code. But it is important to adapt your approach to them: simplify the architecture, limit abstraction, avoid complex dependencies, and do not rely on them when configuring SDKs.
The principles of SOLID, DRY, and Clean Architecture are still relevant – but they give the best results in human hands. When working with an LLM, it is reasonable to use a simplified, practical style that produces reliable, understandable code that is easy to finish by hand. And where the LLM forgets, code duplication helps it remember.