Smaller Aggregates are better
Aggregates should be designed to be small because of the following aspects:
- Memory Concern: Aggregates are Cohesive Wholes, so an Aggregate is loaded in its entirety. The smaller the Aggregate, the faster it loads into memory. Consider an Account Aggregate with an embedded Transactions Entity. If the domain is a Mortgage management system, the number of transactions under each account is expected to be small, so the design works fine. But if the Account is a Checking Account in the Banking domain, transactions may accumulate to large numbers over time. In the latter case, it makes sense to manage Transactions as a separate Aggregate, linked to the Account through an AccountId Value Object.
- Transaction Concern: Aggregates are Transaction Boundaries, so all state changes within an Aggregate are persisted together.
- If the application uses a database that supports atomic transactions (such as an RDBMS), it may lock records for the duration of the update. If the Aggregate is too large, the chances of deadlocks, caused by multiple processes updating the same data, increase.
- With databases that don't support transactional guarantees, we implement Aggregate record version checks before persistence. With large Aggregates, records may be overwritten accidentally, because multiple updates can happen between checking the version number and eventually committing to the database.
- Reasonability Concern: Smaller Aggregates are more natural to understand and reason about, so developers find them easier to work with. The better the understanding of data and behavior, the fewer the bugs and unexpected edge cases in functionality.
- Consistency Concern: Aggregates are responsible for ensuring that all invariants within them are satisfied. Smaller Aggregates tend to have fewer business invariants to satisfy, so they are easier to maintain and manage over a Product's lifecycle.
- Responsibility Concern: Each Aggregate should be responsible for a single part of the domain's functionality. A large Aggregate is a good indicator that it is handling too many things, or addressing too many concerns, in the application.
- Testability Concern: All Aggregate operations that change the state of the Aggregate need to be tested thoroughly. Large and complex Aggregates are harder to validate for behavior and data consistency.
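The Account/Transaction split described under the Memory Concern can be sketched minimally in Python. All class and field names below are illustrative assumptions, not tied to any particular framework:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AccountId:
    """Value Object that links a Transaction back to its Account by identity."""
    value: str


@dataclass
class Account:
    """Small Aggregate: holds only account-level state and invariants."""
    id: AccountId
    balance: int = 0  # amount in cents, avoiding floating-point issues


@dataclass
class Transaction:
    """Separate Aggregate: refers to the Account by AccountId, not by object reference."""
    id: str
    account_id: AccountId
    amount: int


# Loading an Account no longer pulls its (possibly huge) transaction history
# into memory; each Transaction is loaded and persisted independently.
acct = Account(id=AccountId("ACC-1"), balance=10_000)
txn = Transaction(id="TXN-1", account_id=acct.id, amount=-2_500)
```

Because the link is an identifier rather than an embedded collection, the Account Aggregate stays small regardless of how many transactions accumulate.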
The size of an Aggregate is largely determined by:
- the number of entities it needs to enclose to satisfy all invariants
- the transaction volume it handles, which should be kept low enough to minimize the chances of concurrency conflicts
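The version check mentioned earlier, for databases without transactional guarantees, amounts to optimistic concurrency control. Here is a minimal sketch, assuming an in-memory dictionary as a stand-in for the datastore; all names are hypothetical:

```python
class StaleAggregateError(Exception):
    """Raised when another process updated the record after we read it."""


# In-memory stand-in for a datastore record with a version column.
store = {"ACC-1": {"version": 1, "balance": 10_000}}


def save(account_id: str, new_balance: int, expected_version: int) -> None:
    """Persist only if the stored version still matches the version we read."""
    record = store[account_id]
    if record["version"] != expected_version:
        # Someone else committed between our read and our write.
        raise StaleAggregateError(account_id)
    record["balance"] = new_balance
    record["version"] += 1


# Read the current state, then write using the version we observed.
seen_version = store["ACC-1"]["version"]
save("ACC-1", store["ACC-1"]["balance"] - 2_500, seen_version)
```

The smaller the Aggregate, the shorter the window between read and write, and the less likely the version check is to fail.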
To reduce the size of an Aggregate:
- Convert its entities into Aggregates of their own and have them refer to the parent Aggregate by its identifier.
- Ensure that all related transactions modify only one Aggregate instance at a time. Use Eventual Consistency to update other Aggregates over time.
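The eventual-consistency approach above is typically implemented with Domain Events: the transaction that modifies one Aggregate publishes an event, and a handler updates the other Aggregate in a later, separate transaction. A minimal in-process sketch, with all names assumed for illustration (a real system would use a message broker or outbox):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TransactionRecorded:
    """Domain event published when a Transaction Aggregate is persisted."""
    account_id: str
    amount: int


# A trivial in-process event bus; handlers run outside the original transaction.
handlers = []


def publish(event) -> None:
    for handler in handlers:
        handler(event)


accounts = {"ACC-1": {"balance": 10_000}}


def on_transaction_recorded(event: TransactionRecorded) -> None:
    """Brings the Account Aggregate up to date in its own transaction."""
    accounts[event.account_id]["balance"] += event.amount


handlers.append(on_transaction_recorded)

# The Transaction's own change commits first; the Account catches up afterwards.
publish(TransactionRecorded(account_id="ACC-1", amount=-2_500))
```

Each transaction still modifies exactly one Aggregate instance; consistency between the two Aggregates is achieved over time rather than atomically.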