Definition
In hexagonal architecture, an "entity" refers to a fundamental business object or domain model that represents a core concept within the application's domain. These entities encapsulate the business rules and logic, holding the state and behavior that are central to the application's functionality.
Characteristics of Entities in Hexagonal Architecture
- Domain-Centric:
- Entities are at the heart of the domain model, representing key business concepts and rules.
- They embody the core business logic, independent of external systems or technologies.
- Persistence Ignorance:
- Entities are designed without any knowledge of how they will be stored or retrieved from a database.
- This separation ensures that the business logic remains unaffected by changes in data storage mechanisms.
- Rich Behavior:
- Entities contain methods that encapsulate the behaviors and rules related to the domain.
- They are not just data containers but also enforce invariants and business logic.
- Identity:
- Each entity typically has a unique identifier, distinguishing it from other entities.
- This identity is crucial for tracking and managing the lifecycle of the entity within the application.
Role of Entities in Hexagonal Architecture
Entities play a crucial role in hexagonal architecture by providing a stable core that defines the application's business logic. They interact with other components such as:
- Use Cases (Interactors):
- Use cases coordinate the application logic and interact with entities to perform specific operations.
- They represent application-specific business processes and workflows.
- Adapters:
- Adapters handle communication between the core application and external systems (e.g., databases, web services, user interfaces).
- They convert data to and from the format required by the entities.
- Ports:
- Ports define interfaces that decouple the core logic from the adapters.
- They specify the input/output operations without binding to specific technologies or implementations.
By isolating the core business logic within entities and ensuring their independence from external systems, hexagonal architecture enhances maintainability, testability, and flexibility.
How Torpedo works with the Entity
Torpedo generates an object class from your entity definition schema. This class contains the schema fields as class attributes and lets you interact with them via getter and setter methods.
Additionally, the following attributes are added to the class:
Field | Type | Description |
---|---|---|
id | string | It is required and is configured as UUID or ULID |
created | timestamp | It is autogenerated and handled by the CRUD operations |
updated | timestamp | It is autogenerated and handled by the CRUD operations |
Once you have finished your entity schema and generated your code, Torpedo will create two entity classes: the `EntityBase` and the `Entity`. The first contains the autogenerated code with all CRUD operations and the Query method. The `Entity` class inherits from `EntityBase`, letting you add custom logic at the `Entity` level.
Entity and EntityBase code generation
Each time you run the code generator, the `EntityBase` class is overwritten; however, the `Entity` class is kept, so your custom code is never overwritten.
Sample: Author Entity
For a blog website, we define the post author's entity schema as:
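A minimal sketch of what such a schema could look like, based on the `$.spec` reference further down this page (the exact keys and the validator syntax shown here are illustrative assumptions, not the canonical file):

```yaml
# author.yaml — illustrative sketch only
version: torpedo.darksub.io/v1.0
kind: entity
spec:
  name: author
  plural: authors
  description: "The blog post author"
  schema:
    reserved:
      id:
        type: ulid
    fields:
      - name: name
        type: string
        description: "The author's full name"
      - name: email
        type: string
        description: "The author's contact email"
```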
Once we have defined the entity schema and executed the torpedo command to generate the code, the outcome will be:
```mermaid
classDiagram
AuthorBase <|-- Author
AuthorBase : String id
AuthorBase : Int created
AuthorBase : Int updated
AuthorBase : String name
AuthorBase : String email
AuthorBase: +GetId()
AuthorBase: +GetCreated()
AuthorBase: +GetUpdated()
AuthorBase: +SetName(String name)
AuthorBase: +GetName()
AuthorBase: +SetEmail(String email)
AuthorBase: +GetEmail()
```
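Since Torpedo generates Go code, the base/custom split in the diagram can be pictured as struct embedding. This is only an illustration of the pattern; the actual generated code will differ:

```go
package main

import "fmt"

// AuthorBase stands in for the autogenerated base class:
// fields plus accessors. Regeneration overwrites this type.
type AuthorBase struct {
	id      string
	created int64
	updated int64
	name    string
	email   string
}

func (a *AuthorBase) GetId() string       { return a.id }
func (a *AuthorBase) GetCreated() int64   { return a.created }
func (a *AuthorBase) GetUpdated() int64   { return a.updated }
func (a *AuthorBase) SetName(name string) { a.name = name }
func (a *AuthorBase) GetName() string     { return a.name }
func (a *AuthorBase) SetEmail(e string)   { a.email = e }
func (a *AuthorBase) GetEmail() string    { return a.email }

// Author embeds the base and is never overwritten by the
// generator: custom business logic belongs here.
type Author struct {
	AuthorBase
}

func main() {
	a := &Author{}
	a.SetName("Jane Doe")
	a.SetEmail("jane@example.com")
	fmt.Println(a.GetName(), a.GetEmail())
}
```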
As a developer
As a developer, your own business logic MUST be written into the Author class so that the Torpedo code generation tool does not overwrite your code!
We strongly recommend writing your use cases in the Service class rather than in the Entity. The Entity object lets you expand your data model by adding new fields with data types not supported by Torpedo.
Please read Adding Entity custom fields
Yaml definition
document root ($)
Field | Value | Description |
---|---|---|
version | torpedo.darksub.io/v1.0 | The version of the schema |
kind | entity | Means that the yaml describes an entity object |
spec | object | Encapsulates the entity schema |
$.spec
Field | Value | Description |
---|---|---|
name | string | The entity singular name |
plural | string | The entity plural name |
description | string | A brief description about the entity |
doc | string (path) | A file path to a markdown file with more documentation |
schema | string | The entity data schema |
$.spec.schema.reserved
Field | Value | Description |
---|---|---|
id | object | The entity model Id |
$.spec.schema.reserved.id
Field | Value | Description |
---|---|---|
type | string | The supported ID can be ulid or uuid |
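For example, a schema that opts into ULID identifiers could declare (illustrative sketch):

```yaml
spec:
  schema:
    reserved:
      id:
        type: ulid  # or: uuid
```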
UUID
Remember that UUIDs are time-based but are not lexicographically sorted, so they cannot be used to query the entity with cursor pagination.
For further information please read Torpedo Query Language
$.spec.schema.fields
Field | Value | Description |
---|---|---|
name | string | The field name |
type | string | The field data type |
description | string | A brief description about the field |
doc | string | An improved documentation used to complete APIs doc |
encrypted | bool | Set true if the field must be encrypted at storage (output) layer |
readonly | bool | Set true if the field can be set only on entity creation |
optional | string | Set it if the field is optional; the value is used as the field's default |
validate | string | Set the provided data validators |
type
The supported data type must be one of the following values:
- `string`: represents a string data type.
- `integer`: represents integer numbers; mapped to `int64`.
- `float`: represents floating-point numbers; mapped to `float64`.
- `date`: represents a timestamp number; mapped to `int64`.
- `boolean`: represents a boolean value; mapped to `bool`.
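A fields list exercising each supported type could look like this (illustrative sketch; the field names are made up):

```yaml
fields:
  - name: title
    type: string
  - name: views
    type: integer   # mapped to int64
  - name: rating
    type: float     # mapped to float64
  - name: publishedAt
    type: date      # timestamp, mapped to int64
  - name: draft
    type: boolean   # mapped to bool
```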
encrypted
When this capability is set, the field's value is encrypted at the storage layer (Repository). Each time a repository is created, the encryption key must be provided.
AES key
The key argument should be the AES key, either 16, 24, or 32 bytes to select AES-128, AES-192, or AES-256.
optional
A field can be marked as optional, so it can be omitted each time the entity is created or updated. However, instead of setting a `null` value, Torpedo uses a default value that must be defined as part of the yaml spec.
For instance:
- Each time an entity is created and the `endDate` field is not set, it will be set to `-1`.
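The `endDate` example above might be declared along these lines (sketch; the exact key that carries the default value is an assumption):

```yaml
fields:
  - name: endDate
    type: date
    optional: "-1"   # default applied when the field is omitted
```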
validate
Torpedo provides built-in field data validation, which is executed each time the field is set.
The provided built-in validations are: `list`, `range`, and `regex`.
list
Validates the field value against a provided list of values. This is useful, for instance, when the field is an enumerator.
range
Validates the field value against a provided range; useful for validating numeric ranges.
regex
Validates the field value against a provided regular expression. A typical example is email validation.
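Hypothetical declarations for the three validators (the exact `validate` syntax is an assumption made for illustration):

```yaml
fields:
  - name: status
    type: string
    validate:
      list: [draft, published, archived]
  - name: score
    type: integer
    validate:
      range: [0, 100]
  - name: email
    type: string
    validate:
      regex: "^[^@]+@[^@]+\\.[^@]+$"
```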
$.spec.relationships
Relationships define how entities in a domain interact and share data. Understanding these relationships is fundamental for designing a normalized data model that reduces redundancy and ensures data integrity.
Field | Value | Description |
---|---|---|
name | string | The relationship name. It is only informative. |
type | stirng | So far only $rel is supported. |
ref | string (path) | The path to the referenced entity yaml file. |
cardinality | string | The entity relation: hasOne or hasMany |
load (optional) | object | Describes how linked data is loaded when a get method is called. |
Relationship example from blog/.torpedo/entities/post.yaml
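The referenced example is along these lines (illustrative reconstruction; the path and key layout are assumptions):

```yaml
spec:
  relationships:
    - name: author
      type: $rel
      ref: ".torpedo/entities/author.yaml"
      cardinality: hasOne
      load:
        type: nested
```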
load
Each time an entity is read, if the `load` keyword has been set, the resulting entity will be populated with one (hasOne) or a list (hasMany) of the linked entities.
Load types
So far only the load type `nested` is supported; `eager` and `lazy` will be added soon.
$.spec.adapters
Adapters are split into two well-defined groups: `input` and `output`.
$.spec.adapters.input
Inputs are managed through various components that interact with the core application logic. These inputs are typically handled by primary adapters that interface with external systems or users.
By managing inputs through primary adapters and ports, hexagonal architecture ensures that the core application logic remains decoupled from external systems and interfaces. This approach promotes flexibility, maintainability, and testability of the application.
As a primary adapter, Torpedo has a built-in REST API based on Gin Gonic. More adapters, such as GraphQL, will be added in the future.
Field | Value | Description |
---|---|---|
type | string | The built-in input adapter. So far only http as REST API is supported |
metadata | object | The input configuration. This object depends on the previous type value. |
http
The HTTP adapter provides a REST API based on the Gin Gonic HTTP server. It adds the endpoints to support entity CRUD operations plus a query endpoint.
The entity name used to build the URL is taken from the yaml path `$.spec.plural`; for instance, following the blog post example we could have:
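A sketch of the relevant part of the spec:

```yaml
spec:
  name: post
  plural: posts
```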
Based on that example, the `post` CRUD endpoints will be:
[POST|GET|PUT|DELETE] /api/v1/posts
Resource name
However, the `http` adapter can be configured with a different URL name, for instance:
[POST|GET|PUT|DELETE] /api/v1/blog-post
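A hypothetical `http` input adapter overriding the resource name (the metadata key name is an assumption):

```yaml
spec:
  adapters:
    input:
      - type: http
        metadata:
          resource: blog-post
```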
$.spec.adapters.output
Outputs are managed through secondary adapters that handle interactions with external systems, devices, or users. These outputs ensure that the core application logic can produce results or perform actions outside the application without being tightly coupled to the specifics of those external systems.
By managing outputs through secondary adapters and ports, hexagonal architecture ensures that the core application logic remains decoupled from external systems and interfaces. This approach promotes flexibility, maintainability, and testability of the application.
Field | Value | Description |
---|---|---|
type | string | The built-in output adapter. |
metadata | object | The output configuration. This object depends on the previous type value. |
memory
A memory output refers to a type of secondary adapter that handles data storage and retrieval using in-memory structures rather than external databases or file systems. This can be particularly useful for caching, temporary data storage, or scenarios where persistence is not required.
By implementing a memory output as a secondary adapter, Torpedo maintains its principles of decoupling and flexibility, allowing the core application logic to remain independent of the storage mechanism used.
Torpedo Query Language
This adapter doesn't support TQL queries.
mongodb
A MongoDB output refers to a type of secondary adapter that handles data storage and retrieval using MongoDB, a popular NoSQL database. This adapter interacts with MongoDB to persist and retrieve data, allowing the core application logic to remain decoupled from the specifics of the database.
Characteristics of MongoDB Outputs
- Document-Oriented Storage:
- MongoDB stores data as documents in collections, providing a flexible schema. Suitable for applications that require handling semi-structured or unstructured data.
- Scalability:
- MongoDB supports horizontal scaling through sharding. Useful for applications with large datasets or high transaction volumes.
- Rich Query Capabilities:
- MongoDB provides powerful query capabilities, including filtering, aggregation, and indexing.
- Enables complex data retrieval operations.
Output adapter MongoDB sample
- The `collection` attribute lets you configure the collection name.
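A sketch of the output adapter configuration (key names other than `collection` are assumptions):

```yaml
spec:
  adapters:
    output:
      - type: mongodb
        metadata:
          collection: authors
```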
Optional metadata
The metadata configuration is optional; if it is not set, default values are used by Torpedo. For the collection name, the entity name is used.
sql
A SQL output refers to a secondary adapter that handles data storage and retrieval using a relational database management system (RDBMS) such as MySQL, PostgreSQL, or SQLite. This adapter interacts with the SQL database to persist and retrieve data, allowing the core application logic to remain decoupled from the specifics of the database.
Characteristics of SQL Outputs
- Structured Data:
- SQL databases use a structured schema defined by tables, columns, and data types.
- Suitable for applications with well-defined, structured data models.
- ACID Compliance:
- SQL databases ensure atomicity, consistency, isolation, and durability (ACID) of transactions.
- Ideal for applications requiring strong data integrity and reliability.
- Rich Query Capabilities:
- SQL provides powerful querying capabilities through its structured query language.
- Supports complex queries, joins, aggregations, and indexing.
Output adapter SQL sample
- The `table` attribute lets you configure the table name.
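A sketch of the output adapter configuration (key names other than `table` are assumptions):

```yaml
spec:
  adapters:
    output:
      - type: sql
        metadata:
          table: authors
```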
Optional metadata
The metadata configuration is optional; if it is not set, default values are used by Torpedo. For the table name, the entity name is used.
redis
A Redis output refers to a type of secondary adapter that handles data storage and retrieval using Redis, an in-memory key-value store known for its speed and support for a variety of data structures. This adapter interacts with Redis to persist and retrieve data, allowing the core application logic to remain decoupled from the specifics of the storage mechanism.
Characteristics of Redis Outputs
- In-Memory Storage:
- Redis stores data in memory, which allows for extremely fast read and write operations.
- Suitable for caching, session management, real-time analytics, and other performance-critical applications.
- Data Structures:
- Redis supports various data structures such as strings, lists, sets, sorted sets, hashes, bitmaps, and hyperloglogs.
- Flexible for a wide range of use cases.
- Persistence Options:
- Although primarily an in-memory store, Redis supports persistence through snapshots (RDB) and append-only files (AOF).
- Provides options for durability based on application needs.
Output adapter Redis sample
- The `ttl` attribute lets you configure the entity object's Time To Live in the Redis cache. It is set in `milliseconds`.
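A sketch of the output adapter configuration (key names other than `ttl` are assumptions):

```yaml
spec:
  adapters:
    output:
      - type: redis
        metadata:
          ttl: 60000  # milliseconds; 0 means live forever
```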
Optional metadata
The metadata configuration is optional; if it is not set, default values are used by Torpedo. The default TTL is zero, which means the object lives forever or until a delete operation happens.
Torpedo Query Language
This adapter doesn't support TQL queries.
redis+mongodb
Combining Redis and MongoDB can leverage the strengths of both systems: Redis for its in-memory caching capabilities and MongoDB for its flexible, document-oriented persistent storage. This approach can help to improve the overall performance and scalability of the application.
Characteristics of Aggregation with Redis and MongoDB
- Performance:
- Redis provides fast access to frequently accessed data, reducing the load on MongoDB.
- MongoDB serves as the primary data store, ensuring durability and flexible schema management.
- Scalability:
- Redis handles high-throughput reads, improving the responsiveness of the application.
- MongoDB manages larger datasets and complex queries, scaling horizontally as needed.
- Consistency:
- Ensure that the cache in Redis is kept consistent with the data in MongoDB.
- Implement strategies like cache expiration, cache invalidation, and write-through/write-behind caching to manage consistency.
Output adapter Redis+MongoDB sample
- This adapter requires explicitly setting the mongodb and redis types, each with its own configuration.
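A hypothetical combined configuration (the nesting of the two metadata blocks is an assumption):

```yaml
spec:
  adapters:
    output:
      - type: redis+mongodb
        metadata:
          redis:
            ttl: 60000
          mongodb:
            collection: authors
```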
redis+sql
Combining Redis and SQL databases in a hexagonal architecture setup can leverage the strengths of both systems: Redis for its in-memory caching capabilities and the SQL database for its structured, relational data storage with ACID compliance. This approach can improve the overall performance and scalability of the application, providing quick access to frequently accessed data while maintaining data integrity and reliability in the SQL database.
Characteristics of Aggregation with Redis and SQL
- Performance:
- Redis provides fast access to frequently accessed data, reducing the load on the SQL database.
- SQL database serves as the primary data store, ensuring data integrity and supporting complex queries.
- Scalability:
- Redis handles high-throughput reads, improving the responsiveness of the application.
- SQL database manages larger datasets, complex transactions, and relationships.
- Consistency:
- Ensure that the cache in Redis is kept consistent with the data in the SQL database.
- Implement strategies like cache expiration, cache invalidation, and write-through/write-behind caching to manage consistency.
Output adapter Redis+SQL sample
- This adapter requires explicitly setting the sql and redis types, each with its own configuration.
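A hypothetical combined configuration (the nesting of the two metadata blocks is an assumption):

```yaml
spec:
  adapters:
    output:
      - type: redis+sql
        metadata:
          redis:
            ttl: 60000
          sql:
            table: authors
```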