Whatever your language, you can always find an automapping library, be it AutoMapper, MapStruct, or another contender. The attraction is clear:
- Reduce boilerplate
- Remove boring code
- Simplify mapping implementations
In reality, though, you end up with an implementation that only works in the simplest of cases, where an abstraction might not even have been necessary. And once you have fought the library hard enough to finally get it to work, you end up with code that is incredibly resistant to change. This post sums up my opinions on automapping and why I view it as an antipattern (a pattern whose costs outweigh its benefits).
My experiences
Since I started viewing automapping as an antipattern, I have not used it in my last two main projects, which have spanned the last three years. During this time there has been a constant flow of people onboarding and offboarding, and a common theme, especially among the better developers, is that they want to introduce a mapping library. They get annoyed that they have to spend time writing mappings because of the architectural decisions and boundaries I enforce in my projects.
And I really get it.
The first time I encountered an onion model where layers were more strictly enforced, I had the same issues. This is where most of my experience with automapping comes from. The two libraries I have fought the hardest are AutoMapper in C# and MapStruct in Java.
I once spent an entire weekend fighting MapStruct in order to get it to properly initialise a deeply nested object that it kept initialising in an inconsistent way, causing a lot of NullPointerExceptions. Specifically, I was mapping from a request model into a domain aggregate entity that contained both sub-entities and value objects. The sub-entities should always be initialised, with null fields if necessary, while a value object should only be initialised if its inner value was present. It turned out MapStruct supported both of these behaviours, just not at the same time (only one NullValueMappingStrategy was possible at a time). I ended up just writing the mapping manually, which took less than an hour, and this was after 12 hours of trying to get the library to work.
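For contrast, the hand-written version of that mixed null-handling is straightforward. This is a minimal sketch with hypothetical types, not the actual project code: the sub-entity is always initialised, while the value object is only created when its inner value is present.

```java
public final class OrderRequestMapper {

    // Hypothetical request and domain types, for illustration only.
    public record OrderRequest(String shippingAddress, String discountCode) {}
    public record Shipping(String address) {}          // sub-entity
    public record Discount(String code) {}             // value object
    public record Order(Shipping shipping, Discount discount) {}

    public static Order toDomain(OrderRequest request) {
        // Sub-entity: always initialised, even when its fields are null.
        Shipping shipping = new Shipping(request.shippingAddress());

        // Value object: only initialised when the inner value is present.
        Discount discount = request.discountCode() == null
                ? null
                : new Discount(request.discountCode());

        return new Order(shipping, discount);
    }
}
```

Both rules live side by side in one method, with no strategy configuration to reconcile.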
This experience, along with other fights against mapping libraries, is what shaped the opinions expressed in this blog post. I hope it explains clearly enough, to the next awesome developer who challenges assumptions and tries to make their project better, why I always shoot down this suggestion.
Reducing “boilerplate”
Boilerplate is annoying. It is always the same and very rarely changes. It takes focus away from the real problem areas in the code, and you have to spend extra time looking it over every time you are reviewing or debugging, to ensure it works properly.
This is very clear with features such as getters, setters and builders. They are always supposed to work in the same way, and it should be explicitly visible when they do something different. The getter and setter part, at least, is handled very well in languages like C#, where it is explicitly defined whenever you change their standard behaviour. The same goes for Java if you use Lombok to generate them, where you only have to write out the ones that are non-standard. Even builders can be auto-generated by Lombok.
Some mappings tend to be the same way. You have two identical objects, and you want to move the data from one type to the other. This is where automapping really shines: it completely removes the code. However, this scenario only happens when the two objects are effectively the same. This is where I would ask: if the two objects are always the same, why not just use the same object?
The obvious architectural answer is that the objects are expected to change over time. For example, you want to maintain an old version of an API contract. And this is where my problem lies. Because if the mapping is expected to change over time and have custom logic inside it, then the code is not pure boilerplate. In this case, the expected outcome is not a 1-1 mapping.
In the cases where the mapping is not 1-1, convention-based mappers can of course still move data that varies only slightly, such as converting camelCase to PascalCase. But most real mappings need some extra logic to move fields, and this is where automappers break down: they usually end up requiring the developer to define custom mappings anyway.
This is where the trouble begins. When the logic for moving fields is custom logic expressed in the configuration language of a mapping framework, it is no longer obvious to the developer reading the code, who must first learn that language to follow the system.
When you end up having to specify this mapping anyway, why not just do it in plain code instead of trying to express it in a mapping library’s language?
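As a sketch of what plain code looks like for a not-quite-1-1 mapping, here is a hypothetical example where one field is renamed and one is converted. No framework language is needed to follow it.

```java
public final class InvoiceMapper {

    // Hypothetical types: the DTO renames a field and formats the amount.
    public record Invoice(String invoiceNumber, long amountCents) {}
    public record InvoiceDto(String number, String amount) {}

    public static InvoiceDto toDto(Invoice invoice) {
        return new InvoiceDto(
                invoice.invoiceNumber(),          // renamed field
                String.format("%d.%02d",          // converted type: long -> String
                        invoice.amountCents() / 100,
                        invoice.amountCents() % 100));
    }
}
```

The custom logic is just Java: any developer can read it, step through it, and change it without consulting a mapping library's documentation.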
Removing boring code
Writing boring code can be incredibly frustrating, especially when you know exactly how it is supposed to work and the rest is just tedious typing.
I understand the argument — it is not stimulating to write it out.
However, boring code is usually what you want. It is easy to understand and easy to change, because it is explicit. So sometimes you just have to bite the bullet and write the boring code. It might make the current day more tedious, but it also saves a lot of future headaches when the code has to be revisited.
If the main problem is the annoying part of typing it out, why not just ask an LLM to do it? It can easily write this simple code if you give it the two objects, and you can then always change the parts that are non-trivial afterwards.
Simplifying mapping implementation
I once reviewed some code from an incredible developer I worked with. The code was something along these lines:
```java
public class CreateUserRequestMapper {

    public CreateUserCommand toCommand(CreateUserRequest request) {
        return CreateUserCommand.builder()
                .email(request.getEmail())
                .password(request.getPassword())
                .firstName(request.getFirstName())
                .lastName(request.getLastName())
                .build();
    }
}
```
It mapped between two simple objects, with Lombok generating the builder and the getters.
I said: “Could we use a mapping library to do it instead?”
The response was: “This is the simplest code I have written all day. Why should we spend any more time on it?”
And this has resonated with me ever since. Why even bother optimizing the simple parts when we have so much complexity in other places?
If you do experience complexity in the mapping logic, this is usually because business logic has been sneaked into the mapping. This should be avoided. Mapping is for simple transformations such as:
- Switching naming conventions
- Switching types, for example Integer -> String
- Enforcing nullability
- Trimming
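The transformations listed above can be sketched in plain code, using hypothetical types:

```java
import java.util.Objects;

public final class CustomerMapper {

    // Hypothetical types: a raw row and the DTO it maps to.
    public record CustomerRow(Integer id, String name) {}
    public record CustomerDto(String id, String name) {}

    public static CustomerDto toDto(CustomerRow row) {
        // Enforcing nullability: an id must always be present.
        Objects.requireNonNull(row.id(), "id must not be null");
        return new CustomerDto(
                String.valueOf(row.id()),                        // Integer -> String
                row.name() == null ? null : row.name().trim());  // trimming
    }
}
```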
Arguably, some of this logic might even be better handled in the deeper layers, where you can embed it in objects using:
- Explicit constructors
- Static factory methods
- Dedicated construction functions
By doing this, it becomes easier to push the business logic down to the lower layers of the model and allow the mapping to stay simple.
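As a sketch of the static factory method approach, with a hypothetical Email value object: the construction logic lives in the domain object itself, so a mapper only has to call Email.of(...).

```java
public final class Email {

    private final String value;

    private Email(String value) {
        this.value = value;
    }

    // Validation and normalisation live here, not in the mapping layer.
    public static Email of(String raw) {
        if (raw == null || !raw.contains("@")) {
            throw new IllegalArgumentException("invalid email: " + raw);
        }
        return new Email(raw.trim().toLowerCase());
    }

    public String value() {
        return value;
    }
}
```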
In cases where you want to directly pass your lower-level layers the request model, you can deploy dependency inversion, where you define a contract interface in the domain that is implemented by the request. This is a quick and clean way to “map” into an inner layer.
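A minimal sketch of this dependency inversion, with hypothetical names: the domain defines the contract interface, and the request model implements it, so no separate mapping step is needed.

```java
public final class InversionExample {

    // Defined in the domain layer: the contract the domain works against.
    public interface NewUser {
        String email();
        String displayName();
    }

    // The request model in the outer layer implements the domain contract;
    // the record's accessors satisfy the interface directly.
    public record CreateUserRequest(String email, String displayName)
            implements NewUser {}

    // The domain service accepts the contract, never the request type.
    public static String register(NewUser user) {
        return "registered " + user.email();
    }
}
```

The dependency arrow points inward: the outer layer depends on the domain interface, not the other way around.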
More downsides
While this post covers the major arguments for automapping and why the promised benefits never actually materialise, it does not cover everything. There are more downsides to automapping that should not be forgotten:
- Runtime errors. Mapping libraries generally only fail at runtime, whereas the type safety of manually written code catches breaking changes at compile time.
- Debuggability. Handwritten mapping code is a lot easier to debug than auto mapping library logic, especially when something eventually goes wrong.
- Performance. Hand-rolled code is the fastest, or at least the easiest to optimise. In practice this argument carries little weight, though, because the difference is so small you will almost never notice it in a real application.
- Dependency on an external library. All dependencies should be viewed with suspicion, as they introduce risk and contribute to dependency hell. This argument is especially strong in light of AutoMapper’s move from a free, open-source library to a subscription model; this dependency, which I argue is unnecessary, has even caused direct financial loss in some projects.
Conclusion
Trying to make your project better is a noble goal. At this point I am simply very confident that automapping will not make anything better in the long run. This is why I view it as an antipattern. Do not get tempted.