Event Storming

Converting a traditional monolithic application into an event-driven serverless architecture is no mean feat. Event storming is a workshop-based approach that can help overcome some of the common challenges involved.

Transforming Monolithic Applications with Event Storming

Common issues with converting a traditional monolithic application include:

  • Broken integrations. It’s easy to miss integrations between microservices, and these might only be discovered later during end-to-end testing or, even worse, by users after the service is live.
  • Creating a distributed monolith. There is a risk, especially for application teams that are less experienced in microservices, of creating a distributed monolith instead of proper microservices. In some cases, elements remain heavily dependent on each other or on shared code. Sometimes the scope of a microservice is too large, so it behaves more like a mini-monolith than a true event-driven microservice.
  • Incomplete services. Given the complexity and number of features in a typical monolith, it’s easy to miss functionality buried within it. Often this does not come to light until actual user tests are performed.
  • Duplicated services. It’s common to end up with more than one microservice performing the same or very similar tasks.

Event storming is a facilitated group modelling approach to Domain-Driven Design (DDD). It’s an effective way to deconstruct and transform monoliths, establishing clear requirements for the rebuild.

How Event Storming Works

Typically, event storming uses business models to guide the architecture design process. This helps workshop participants explore and understand the inherent complexity of the target monolith. Business events, the relationships between them, and their triggers are identified. Then participants consider how all these moving parts work together to provide the full solution. Knowledge sharing and discussion are vital, so workshops and approaches are structured but adaptable. It’s important that the process meets the specific needs of the organisation.

Relevant domain experts need to engage with event storming because it’s unlikely that a single person will understand everything about a large and complex monolithic application. The ideal group size is around six to eight people, with each playing a different role in the project. These experts will have experienced the application from different viewpoints, and together should be able to provide a comprehensive understanding.

[Diagram: Event Storming Roles]

As the above diagram indicates, event storming involves multiple components. Every application has entities which interact with it. We describe these as ‘systems’ (applications, third-party services and other digital entities) and ‘actors’ (human users).

Actors and systems produce commands or instructions for the application to do something. In the above example, the user submits an order which results in the domain event ‘order submitted’.

A domain event results in artefacts being produced, which may be shown in a user interface (e.g. order ID). It may also trigger policies, which are branches in a workflow with different possible outcomes depending on the input.

For example, for a submitted order, we might want to validate that there are products in the order. The outcome of a policy could be to generate a new command or interact with a system.
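
To make this vocabulary concrete, here is a minimal sketch of how these components might relate, expressed as Python dataclasses. The class and field names are our own illustration, not part of any event storming notation:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the class and field names below are hypothetical,
# chosen to mirror the event storming vocabulary described above.

@dataclass
class Entity:
    name: str
    kind: str  # "actor" (human user) or "system" (application or service)

@dataclass
class Command:
    name: str          # an instruction, e.g. "submit order"
    issued_by: Entity  # commands always come from an actor or a system

@dataclass
class DomainEvent:
    name: str               # past tense, e.g. "order submitted"
    triggered_by: Command
    artefacts: list = field(default_factory=list)  # e.g. an order ID shown in the UI

# The running example from the diagram: a user submits an order.
customer = Entity(name="customer", kind="actor")
submit_order = Command(name="submit order", issued_by=customer)
order_submitted = DomainEvent(
    name="order submitted",
    triggered_by=submit_order,
    artefacts=["order ID"],
)
```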

Seven Steps of Event Storming

1. Identify the domain events.

All workshop participants contribute to this, describing individual events from the application workflows. They are written in the past tense and generally describe an action of some kind. The goal is to collect as many domain events as possible.
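
As an illustration, a few past-tense events as they might appear on sticky notes for the order example used throughout this tutorial (the names beyond ‘order submitted’ are hypothetical):

```python
# A few past-tense domain events for the order workflow (illustrative only).
domain_events = [
    "order submitted",
    "order validated",
    "payment received",
    "order shipped",
]
```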

2. Sequence domain events to tell a story of workflows.

Domain events are sorted from left to right in chronological order, with any duplicates removed or merged. Alternate and parallel flows can be placed in a separate swim lane above or below the main workflow. There’s a good chance that missing events will come to light during the process and these can just be added to the board to complete the sequence. The outcome of this step is a structured overview of the entire application.
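
As a rough sketch, the sequenced board could be captured as an ordered main flow plus separate swim lanes for alternate flows; the lane and event names here are illustrative:

```python
# The sequenced board: the main workflow as an ordered list, with an
# alternate flow in its own swim lane (hypothetical names throughout).
board = {
    "main": ["order submitted", "order validated", "payment received", "order shipped"],
    "alternate: validation failed": ["order rejected", "customer notified"],
}
```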

3. Identify the commands.

Triggers, or commands, responsible for creating each domain event are written with verbs in the present tense. For example, ‘customer submits order’. Remember that commands are given by an entity, either an actor or a system.
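
A minimal sketch pairing each present-tense command with the past-tense event it creates (‘customer submits order’ is from the text; the other pairs are hypothetical):

```python
# Present-tense commands mapped to the past-tense events they create.
command_to_event = {
    "customer submits order": "order submitted",
    "system validates order": "order validated",
    "customer pays invoice": "payment received",
}
```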

4. Identify the entities.

Actors may be customers, employees, or any other human that interacts with the application. Systems might include cloud services or an external payment service that interacts with the process. Systems can produce commands through scheduled events, callbacks, or when certain actions or rules trigger an event.

5. Introduce policies.

Applied after a domain event, policies use rules to determine the next step in your workflow. Examples of rules include performing validation on user input, making an if/else decision based on a particular variable, or taking action depending on whether the previous event was processed successfully or not.

One aspect to note here is that we are not yet looking at technical implementation, so we don’t need to consider all the checks and validations that might be needed. For example, when user input validation fails, we want to know the next step, but we don’t need to define the required parameters, values, or format that might cause a failure.
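
As an illustration, the ‘products in the order’ policy above might be sketched as a simple branching rule. The function and command names are hypothetical, and the detailed validation logic is deliberately left out, as noted:

```python
def order_submitted_policy(order: dict) -> str:
    """Policy applied after the 'order submitted' event (illustrative).

    At this stage we only decide the next step in the workflow; the
    detailed validation rules are intentionally out of scope.
    """
    if order.get("products"):      # rule: the order must contain products
        return "request payment"   # next command on the happy path
    return "reject order"          # alternate branch for an empty order
```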

6. Identify artefacts and schemas from domain events.

Artefacts might include media files and documents, while schemas define the contents of database records. It’s important to determine the appropriate cloud service for storing each data type. Artefacts would likely be stored in a service such as S3 or Glacier; product inventory schemas may be a good match for a database such as DynamoDB; and financial transaction data that requires a high level of integrity could be stored in a Quantum Ledger Database.
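
A sketch of the resulting mapping, using the AWS services named above (other cloud providers offer equivalents):

```python
# Illustrative mapping from the data identified in this step to the
# AWS services suggested in the text.
storage_choices = {
    "media files and documents": "Amazon S3 (or Glacier for archives)",
    "product inventory schema": "Amazon DynamoDB",
    "financial transactions": "Amazon QLDB (Quantum Ledger Database)",
}
```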

7. Aggregate events.

This can be achieved by grouping the sticky notes within each workflow using nouns that define the ‘thing’ that the sticky note operates on. These aggregates are what your domain events are creating, reading, writing, and deleting. The below example from our video involves three aggregates: ‘order’, ‘payment’, and ‘delivery’.

[Diagram: Event Storming Aggregate Events]

Lastly, we create the ‘bounded context’ to represent the scope of the application. As the above diagram shows, we group related events together, which can help remove dependencies between them. Later in the architecture, each boundary will often be represented by one or more microservices. Our example is quite simple, so the bounded contexts just follow the aggregates. In more complex applications, a bounded context may consist of more than one aggregate, and aggregates might span multiple boundaries.
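
A sketch of the three aggregates from the example and the bounded contexts that follow them; event names beyond ‘order submitted’ are hypothetical:

```python
# Each aggregate groups the events that operate on that 'thing'.
aggregates = {
    "order": ["order submitted", "order validated"],
    "payment": ["payment received"],
    "delivery": ["order shipped", "delivery confirmed"],
}

# Here each bounded context simply follows one aggregate; in a more
# complex application a context might contain several aggregates.
bounded_contexts = {"order": ["order"], "payment": ["payment"], "delivery": ["delivery"]}
```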

Using Event Storming Outputs 

Cloud architects can use event storming outputs to design the event-driven microservices architecture: 

  • Bounded contexts can help identify where microservices are needed.
  • Policies define the business logic needed in services such as API Gateway. 
  • Commands indicate how we want to invoke microservices (e.g. via an API or using event triggers); a minimal sketch follows this list.
  • Aggregates, schemas, and artefacts help define the databases and storage facilities we will need and the structure of the data within them. 
  • Actors help to define user roles and permissions and systems could highlight any external connectivity that we need to consider.
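
For example, the ‘customer submits order’ command could be exposed as an API-invoked microservice. This is a minimal sketch only, assuming an AWS Lambda function behind API Gateway that republishes the domain event to Amazon EventBridge (EventBridge is our assumption here; the tutorial does not prescribe a particular event bus):

```python
import json
import boto3  # AWS SDK; assumes this runs as a Lambda function behind API Gateway

events = boto3.client("events")  # Amazon EventBridge client

def handler(event, context):
    """Hypothetical 'submit order' microservice entry point.

    The command arrives as an API Gateway request; the service emits the
    'order submitted' domain event for downstream services to react to.
    """
    order = json.loads(event["body"])  # API Gateway proxy integration body
    events.put_events(Entries=[{
        "Source": "ordering",             # bounded context name (illustrative)
        "DetailType": "order submitted",  # the domain event, past tense
        "Detail": json.dumps(order),
    }])
    return {"statusCode": 202, "body": json.dumps({"orderId": order.get("id")})}
```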
