InsideDarkWeb.com

How far to go when decoupling Microservices by use of Integration Events (Messages)?

I am reading the architecture guide from the .NET Core project. They state:

The integration events can be defined at the application level of each microservice, so they are decoupled from other microservices, in a way comparable to how ViewModels are defined in the server and client. What is not recommended is sharing a common integration events library across multiple microservices; doing that would be coupling those microservices with a single event definition data library. You do not want to do that for the same reasons that you do not want to share a common domain model across multiple microservices: microservices must be completely autonomous.

There are only a few kinds of libraries you should share across microservices. One is libraries that are final application blocks, like the Event Bus client API, as in eShopOnContainers. Another is libraries that constitute tools that could also be shared as NuGet components, like JSON serializers.

There is a reference implementation, the eShopOnContainers repo on GitHub. Digging around a little, I found that they duplicate the messages in both services. For example, the OrderPaymentSucceededIntegrationEvent appears in the publishing payment service as well as in the subscribing order service.
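For illustration, the duplication looks roughly like this (a simplified sketch; the actual eShopOnContainers event classes carry more fields and derive from a shared IntegrationEvent base type):

```csharp
// Payment.API/IntegrationEvents/OrderPaymentSucceededIntegrationEvent.cs
// Owned by the publisher.
public record OrderPaymentSucceededIntegrationEvent(int OrderId);

// Ordering.API/IntegrationEvents/OrderPaymentSucceededIntegrationEvent.cs
// A structurally identical copy, owned by the subscriber.
// The two types never reference each other; only the serialized
// payload (matched by event name and field names) ties them together.
public record OrderPaymentSucceededIntegrationEvent(int OrderId);
```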

My feelings about this approach are mixed. Sure, it is decoupled in the sense that there is no compile-time dependency. But any change to the message might break the application at runtime, since the compiler does not check that the message sent matches the message received. Would it be illegal to publish a kind of "Contracts" assembly providing all the messages published by a microservice, to be bound at compile time by the subscriber? I'd rather think of such messages as "common knowledge", somewhat like the base class library is common knowledge for all .NET Core programs.

Software Engineering Asked by Marc Wittke on November 11, 2021

One Answer

One of the greatest promises of microservices is that you get independently deployable services.

There are a lot of forces working against this, so you should make wise decisions whenever you introduce anything that is shared among different services (a static XML file, a library, a database, etc.).

Whenever you have a shared library (for example, one that contains the contract objects), you should consider, for every change you make, how it affects the services (the provider and its consumers):

  • Do I have to deploy only the provider because my change is backward compatible?
  • Do I have to deploy all of my consumers first and finally the provider because my change is not backward compatible?
  • Is it acceptable that different consumer services may use different versions of the package?

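To make the first two questions concrete, here is a hedged C# sketch (the type and field names are invented for illustration): with typical JSON serialization, adding a new optional property is backward compatible, while renaming one breaks existing consumers at deserialization time, without any compile error:

```csharp
// v1 of the contract, which consumers currently deserialize:
public record OrderPaymentSucceededIntegrationEvent(int OrderId);

// Backward-compatible change: an added optional property with a default.
// Old consumers simply ignore the extra JSON field, so only the
// provider needs to be redeployed.
public record OrderPaymentSucceededIntegrationEventV2(int OrderId, string? Currency = null);

// Breaking change: OrderId renamed to Id. Old consumers find no
// "OrderId" field in the payload and read its default value (0) -
// a silent runtime bug, so all consumers must be deployed first.
public record OrderPaymentSucceededIntegrationEventV3(int Id);
```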
The proposed solution (duplicating the contract objects) emphasizes the independence of the services. More precisely, it loosens the coupling between them.

A common and valuable practice is Consumer-Driven Contract Testing. The big idea is that each consumer gives the provider its own contract. ("Hey look, this is how I use you. You can change in any direction as long as you can still guarantee me this contract.")

So instead of testing the provider's API as such, we test it from each consumer's point of view. With that in mind, it makes more sense to keep the contract objects separate and let the service and its consumers evolve slightly more independently.

There is a really great package called Pact.Net which can help you write these kinds of tests.
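The core idea can be shown without the library: the consumer pins the payload shape it actually relies on as a test fixture, and that fixture becomes the contract the provider must keep honoring. A minimal sketch using System.Text.Json (Pact.Net automates sharing these contracts with the provider and verifying them there):

```csharp
using System;
using System.Text.Json;

// The consumer's own copy of the message contract.
public record OrderPaymentSucceededIntegrationEvent(int OrderId);

public class OrderPaymentContractTest
{
    // The consumer's contract: "this is the payload shape I depend on".
    private const string SamplePayload = @"{""OrderId"": 42}";

    public void Consumer_can_read_the_fields_it_uses()
    {
        var evt = JsonSerializer.Deserialize<OrderPaymentSucceededIntegrationEvent>(SamplePayload);
        if (evt is null || evt.OrderId != 42)
            throw new Exception("Contract broken: OrderId is no longer readable.");
    }
}
```

If the provider renames or removes a field the consumer depends on, the shared fixture fails on the provider's side before deployment, instead of failing silently at runtime.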

Answered by Peter Csala on November 11, 2021
