Oliver's Blog

June 2017 - Posts

  • DevExtreme - Real World Patterns - Event Sourcing Implementation

    This post is part of a series describing a demo project that employs various real-world patterns and tools to provide access to data in a MongoDB database for the DevExtreme grid widgets. You can find the introduction and overview to the post series by following this link.

    Continuing from my first post about the Event Sourcing branch of the demo project, here are some details about my implementation. For quick reference, here’s the link to the branch again.

    Implementation

    As you can see from the diagrams in the architecture post, there are several changes to the original structure to incorporate the new architecture and functionality. I will first describe how a new row ends up being persisted in the read model.

    Using RabbitMQ

    As described in a previous post, my demo application uses Seneca for communication purposes. In the original branch of the demo, I configured Seneca in each service to communicate directly with each of the other services. While not the most common approach in a real-world application, this worked just fine with the demo setup.

    To support the event features of the new architecture, it is now mandatory to have a communication channel that supports broadcasting events. Seneca itself doesn’t do this, but it has pluggable transport channels, some of which support event broadcasting. I configured Seneca to use RabbitMQ with the help of seneca-amqp-transport. Here’s an example of the changed Seneca startup code (from query-service/index.js):

    seneca
      .use('seneca-amqp-transport')
      .use('query-values')
      .use('query-events')
      .listen({
        type: 'amqp',
        hostname: process.env.RABBITMQ_HOST || 'rabbitmq',
        port: parseInt(process.env.RABBITMQ_PORT) || 5672,
        pin: 'role:entitiesQuery',
        socketOptions: {
          noDelay: true
        }
      });
    

    In my orchestrated service setup, the RabbitMQ service is spun up together with all the others in docker-compose.yml. If you compare the details of this file against the master branch version, you will find that things are actually much simpler now that services only depend on rabbitmq (and sometimes mongo), but not on each other.

    The command service

    When a new row is created in the front-end, the command service receives a message from the web-proxy. In my new implementation, I have replaced the old command service completely with code that utilizes the reSolve library. At this time, this is a project under development by one of our teams, and I expect to be able to blog more about it in the future – right now there isn’t any public information available.

    As the diagram shows, the most important aspect of the command service is that it raises domain events that can be handled by other services. The structure of the events is determined by the simple return statements at the ends of the create and update command declarations (from command-service/index.js):

    ...
    return {
      type: 'created',
      payload: args
    };
    
    ...
    
    return {
      type: 'updated',
      payload: args
    };
    

    In addition to this, the command service performs very limited local state handling, to enable it to detect whether entities exist already. The event handler for the created event receives the domain event and modifies the local state to reflect the fact that the entity with the given id exists now:

    eventHandlers: {
      created: (state, args) => ({
        exists: true,
        id: args.aggregateId
      })
    }
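
    Here is a rough sketch of how that local state could be used inside the create command declaration. This is an illustration only, not the exact code from command-service/index.js, and it assumes that command declarations receive the state in the same (state, args) shape as the event handler shown above:

    create: (state, args) => {
      // reject the command if an event handler has already marked this aggregate as existing
      if (state && state.exists)
        throw new Error(`Aggregate ${args.aggregateId} already exists`);

      // ...validation and the event creation shown in this post...
      return {
        type: 'created',
        payload: args
      };
    }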
    

    There is also a very basic validation implementation for the create command, simply to make sure that all relevant information is actually supplied:

    if (
      !args.aggregateId ||
      !args.data.date1 ||
      !args.data.date2 ||
      !args.data.int1 ||
      !args.data.int2 ||
      !args.data.string
    )
      throw new Error(
        `Can't create incomplete aggregate: ${JSON.stringify(args)}`
      );
    

    In the case of the demo application, commands can only be sent by specific components of my own system. As such, the validation performed on this level is only a safety net against my own accidental misuse. Business level validation is performed at other points, as I already mentioned in the diagram description above.

    The command service is configured to receive commands through Seneca (based on RabbitMQ, see above), and it also publishes events through Seneca. For the latter purpose, I created a bus implementation for reSolve, which you can find in resolve-bus-seneca/index.js. The most important detail is that the RabbitMQ exchange used by the bus is configured as a fanout type, which makes it possible for multiple clients to receive messages through that exchange.
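
    To illustrate the fanout idea outside of the Seneca and reSolve specifics, here is a minimal sketch of a fanout-based publish/subscribe setup using the amqplib package. This is not the code from resolve-bus-seneca/index.js; the exchange name handling and the JSON message format are assumptions made for the example:

    const amqp = require('amqplib');

    async function createFanoutBus(url, exchangeName) {
      const connection = await amqp.connect(url);
      const channel = await connection.createChannel();
      // a fanout exchange copies every published message to all bound queues
      await channel.assertExchange(exchangeName, 'fanout', { durable: false });

      return {
        publish(event) {
          channel.publish(exchangeName, '', Buffer.from(JSON.stringify(event)));
        },
        async subscribe(handler) {
          // an exclusive, server-named queue per subscriber means every
          // subscriber receives its own copy of each event
          const { queue } = await channel.assertQueue('', { exclusive: true });
          await channel.bindQueue(queue, exchangeName, '');
          channel.consume(
            queue,
            msg => msg && handler(JSON.parse(msg.content.toString())),
            { noAck: true }
          );
        }
      };
    }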

    I may decide in the future to publish this bus as an npm package, but at this time it is part of the demo codebase.

    The readmodel service

    This service is new to the project, and its purpose is to receive domain events through the bus and react by creating and maintaining a persistent representation of the entities in the system. As an example, here is the event handler that creates a new instance (from events.js):

    this.add('role: event, aggregateName: entity, eventName: created', (m, r) => {
      m = fixObject(m);
    
      const newObject = m.event.payload.data;
      newObject._id = m.event.payload.data.id;
      db(db =>
        db.collection(m.aggregateName).insertOne(newObject, err => {
          if (err) console.error('Error persisting new entity: ', err);
          r();
        })
      );
    });
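
    The corresponding handler for the updated event is not reproduced here. As a rough sketch (not the actual code from events.js, and assuming the payload carries the entity id alongside the changed fields, as the created payload does), it could look like this:

    this.add('role: event, aggregateName: entity, eventName: updated', (m, r) => {
      m = fixObject(m);

      const changes = m.event.payload.data;
      db(db =>
        db.collection(m.aggregateName).updateOne(
          { _id: changes.id }, // assumes the id travels with the payload data
          { $set: changes },
          err => {
            if (err) console.error('Error updating entity: ', err);
            r();
          }
        )
      );
    });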
    

    Query tracking

    The second part of the new functionality is the change notification feature. The following steps are performed for this purpose:

    1. The front-end runs a data query, as before. However, now it passes a parameter that tells the web-proxy to track the query.
    2. The web-proxy sends a message to the query-change-detector to register the query for tracking.
    3. Data is queried via the query-service and returned to the front-end. An ID value for the tracked query is also returned.
    4. With the ID value, the front-end opens a socket.io connection to the web-proxy, which registers the client and connects it with the tracked query by means of the ID.
    5. At a later point, if the command-service raises a domain event, the query-change-detector receives this and runs tracked queries to detect changes.
    6. If changes are found, a change notification message is sent to the web-proxy.
    7. The web-proxy uses the open socket.io connection to notify the front-end of the changes.
    8. The front-end handles the change notification by applying changes to the visible grid.

    Here is another flowchart outlining the steps:

    Query Tracking

    The step of registering the query is very simple. As part of its basic query processing, the web-proxy processes the query parameters passed from the front-end and generates a message to send to the query-service. To register the query for change tracking, that same message object is passed on to the query-change-detector. You can see this towards the end of the listValues function in proxy.js, and the other side in querychanges.js.
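
    As a rough sketch of what the registration step amounts to (not the actual proxy.js code: the role and cmd values are assumptions, and queryMessage and callback stand in for the surrounding listValues code), the web-proxy essentially does something like this:

    // queryMessage is the same object that is sent to the query-service
    seneca.act(
      { role: 'querychange', cmd: 'register', query: queryMessage },
      (err, res) => {
        if (err) return callback(err);
        // the id of the tracked query is returned to the front-end as liveId
        callback(null, { liveId: res.queryId });
      }
    );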

    Socket.io is a JavaScript library that facilitates bidirectional communication (similar to SignalR for ASP.NET projects). The web-proxy accepts socket.io client connections in sockets.js, and the startup code of the service has been modified to spin up socket.io (from web-proxy/index.js):

    const express = seneca.export('web/context')();
    const server = http.Server(express);
    const io = socketIo(server);
    require('./sockets')(seneca, io, liveClients);
    
    const port = process.env.WEBPROXY_PORT || 3000;
    
    server.listen(port, () => {
      console.log('Web Proxy running on port ' + port);
    });
    

    As part of the query logic in dataStore.js, the front-end application connects to the web-proxy using socket.io:

    if (params.live && res.liveId) {
      var socket = io.connect(dataStoreOptions.socketIoUrl);
      socket.on('hello', function(args, reply) {
        socket.on('registered', function() {
          store.registerSocket(res.liveId, socket);
          socket.on('querychange', function(changeInfo) {
            dataStoreOptions.changeNotification(changeInfo);
          });
        });
    
        reply({
          liveId: res.liveId
        });
      });
    }
    

    In query-change-detector/events.js, you can see the handling of incoming domain events. To avoid delays in event handling, events are accumulated in a queue, grouped by the entities they pertain to. In a background loop, the query-change-detector then handles the events, possibly re-runs the queries it is tracking, and fires events of its own when changes are detected. There is some handling to prevent overly large numbers of change notifications from being sent in case of bursts of domain events, as well as various edge case checks depending on the types of the queries.
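
    The following sketch shows the general queue-and-loop idea in simplified form. It is not the actual query-change-detector code, and trackedQueries, rerunQuery, resultsChanged and notifyChange are placeholders for the real implementations:

    const pendingEvents = {}; // aggregate id -> queued events

    function enqueue(event) {
      const id = event.aggregateId;
      (pendingEvents[id] = pendingEvents[id] || []).push(event);
    }

    // background loop: drain the queue in batches so that a burst of domain
    // events results in one notification per tracked query, not one per event
    setInterval(() => {
      Object.keys(pendingEvents).forEach(id => {
        const events = pendingEvents[id];
        delete pendingEvents[id];

        trackedQueries.forEach(query =>
          rerunQuery(query, results => {
            if (resultsChanged(query, results)) notifyChange(query, events);
          })
        );
      });
    }, 500);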

    When the change notification is received by the web-proxy, it uses the existing socket.io connection to send the notification to the front-end (from queryChanges.js):

    if (liveClients.hasId(m.queryId)) {
      const socket = liveClients.getSocket(m.queryId);
      if (socket) {
        socket.emit('querychange', {
          liveId: m.queryId,
          batchUpdate: m.batchUpdate,
          events: m.events
        });
        ...
    } ...
    

    Finally, the front-end client receives the change notification. In changeNotification.js, you can see the code I wrote to apply changes to the Data Grid (function trackGridChanges) depending on the notification. I attempt to merge the changes into the current view of the grid as efficiently as possible.

    In contrast, the function trackPivotGridChanges is much shorter and represents the minimal implementation by simply reloading the grid – unfortunately, the Pivot Grid does not, at this point, support granular update techniques similar to those of the Data Grid.
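
    For reference, the minimal approach for the Pivot Grid boils down to something like the following sketch (not the exact trackPivotGridChanges code; the parameter shape is an assumption):

    function trackPivotGridChanges(pivotGrid /*, changeInfo */) {
      // reloading the PivotGridDataSource re-runs the query against the service
      pivotGrid.getDataSource().reload();
    }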

    Try it!

    This concludes my description of the new branch. Please try it for yourself and don’t hesitate to get in touch (here or via email) if you have any problems, questions or suggestions!

  • DevExtreme - Real World Patterns - Event Sourcing Architecture

    This post is part of a series describing a demo project that employs various real-world patterns and tools to provide access to data in a MongoDB database for the DevExtreme grid widgets. You can find the introduction and overview to the post series by following this link.

    I have created another new branch for the demo that implements Event Sourcing on top of the CQRS pattern that was part of the concept from the beginning. You can access the branch by following this link.

    Event Sourcing

    The idea of the Event Sourcing pattern is to store actions, or events, instead of data that changes in-place over time. Imagine you didn’t have a database that contained the current state of all your business data. With Event Sourcing, you (may) only have a log of all data-relevant actions that ever occurred in your system. This log can only be appended to, and by replaying the actions in the log, you can arrive at the current state of an entity - or any other state the entity had at any point in time.
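
    As a tiny illustration (with made-up events, loosely shaped like the created/updated events used later in this series), the current state of an entity is simply a fold over its event log; replaying only a prefix of the log yields the state at that earlier point in time:

    const eventLog = [
      { type: 'created', payload: { id: 42, string: 'first value', int1: 10 } },
      { type: 'updated', payload: { int1: 15 } }
    ];

    // fold the events into the current state of the entity
    const currentState = eventLog.reduce(
      (state, event) => Object.assign({}, state, event.payload),
      {}
    );

    console.log(currentState); // { id: 42, string: 'first value', int1: 15 }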

    Note that the terminology used by different authors to describe entities is somewhat ambiguous. Sometimes such entities may exist in memory in the shape of domain objects, and this term is often used. The term aggregate is also frequently used to capture the idea that some data structure aggregates information from events flowing through the system.

    At this point in my description, entities exist only as in-memory data that may be kept by the Event Sourcing system to reflect the current state and to avoid having to regenerate entities when new events arrive. Sometimes, snapshots may be used to persist state at certain points of the event log. This can be useful to save time when the system is restarted, because the number of events that need to be replayed to arrive at the current state is then smaller.

    The final item I’ll mention here (and one I’m not even making use of in my demo) is the projection. Event Sourcing systems support projection definitions, which facilitate queries - they represent another structure of in-memory data whose shape is defined according to specific query requirements, and which is maintained continuously as events are triggered.
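
    As a minimal sketch of the concept (not something my demo implements), a projection could be an in-memory count of entities per string value, updated incrementally as each event arrives instead of being recomputed from the log; eventBus.subscribe is a placeholder for whatever mechanism delivers the domain events:

    const countByString = {}; // query-specific shape, maintained incrementally

    function applyEvent(event) {
      if (event.type === 'created') {
        const key = event.payload.string;
        countByString[key] = (countByString[key] || 0) + 1;
      }
      // 'updated' and 'deleted' events would adjust the counts accordingly
    }

    eventBus.subscribe(applyEvent); // placeholder subscription to the event stream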

    For more general information on Event Sourcing I recommend you read Martin Fowler’s article as well as this description on Microsoft’s website.

    Using a read model

    For the querying requirements of my demo application, the projection technique is not a good choice. Since queries depend on user interaction and are fully dynamic, it would be impossible to maintain projections for all query option combinations. Theoretically, a projection could be created dynamically when a query is run, but that would mean replaying the event log for each query. This would not result in satisfactory performance.

    Instead, I decided to generate a persistent read model for the queryable data. This is a common approach for various scenarios. In a real-world application, you could choose which parts of your data require read models, and these models could take shapes that are specifically adapted to the requirements of the queries you anticipate running against them. In my simple case, I decided to persist the read model in the same structure I was previously using for my persistent data, so that my querying logic would run against it without change.

    Architecture

    To visualize the changes in the system, I created a flowchart of the process that begins with the user creating a new row in the front-end application. Here’s the straightforward implementation that was used in the original branch of the demo:

    Creating a new row

    The arrows in the image denote messages being sent from one service to the other, and it is a linear process that leads from the front-end to the persistent storage.

    In comparison, here is the flowchart for the new implementation:

    Creating a new row with Event Sourcing

    This time, the command service doesn’t contact the mongo service directly. Instead, it raises a domain event (events are denoted by the dashed arrow lines), which is handled both by the readmodel service and the query-change-detector. Through the readmodel service, the data is persisted as before (again, I decided at this point to keep the data structure the same).

    On the basis of the domain events raised in response to commands, there is now an easy way of tracking changes in the system. I decided to utilize this to provide query change tracking for the front-end application. The query-change-detector monitors the domain events and checks whether they influence a query that has been registered with the service (this happens at an earlier stage, when the query is first run). In that case, the service sends a message to the web-proxy (this is logically an event message, denoted as such), which forwards the information to the front-end.

    Note that both diagrams skip a step made by the web-proxy, where the new data is sent to the validation service before anything else happens. This step is not relevant to the discussion in this post.

    Implementation

    For more details about the implementation of the architecture I described above, please see this follow-up post.

  • DevExtreme - Real World Patterns - ASP.NET Core MVC front-end

    This post is part of a series describing a demo project that employs various real-world patterns and tools to provide access to data in a MongoDB database for the DevExtreme grid widgets. You can find the introduction and overview to the post series by following this link.

    I have created a branch that uses ASP.NET Core MVC and our controls for that platform for the frontend application of my demo project. Follow this link to access the branch. Please pay attention to the Prerequisites section below if you want to try this branch for yourself.

    Prerequisites

    The general instructions outlined in the README still apply for this branch. There is a caveat with regard to the runtime environment: I have not looked into options for live-debugging the .NET Core based application, and I’m also not mounting the project directories into the running Docker container for this application, since this resulted in issues with dynamic compilation.

    One important step you have to take before you run this branch is to configure your .NET Core NuGet sources to include the DevExpress specific URL with our own access key. Details about our NuGet repository can be found here, and my recent blog post describes how to configure a Linux system correctly. I have not tested this yet, but I assume the instructions from my post apply to Mac computers as well.

    The front-end project for this branch is configured with a reference to DevExtreme.AspNet.Core, and the modules-install step of the README documentation calls dotnet restore. The DevExtreme assembly will only be restored if you have configured your NuGet sources correctly! Please feel free to get back to me if you need any further help with this.

    Details

    Generally, the controls from our DevExtreme ASP.NET MVC Core package work as generators, translating the Razor syntax you use in your views into jQuery calls that are embedded in the rendered view HTML. As a result, the views DataGrid.cshtml and PivotGrid.cshtml are translations of the original JavaScript code. Here’s a snippet from DataGrid.cshtml:

    ...
    .Summary(summary => {
        summary.TotalItems(ti => {
            ti.Add()
                .Column("date1")
                .SummaryType(SummaryType.Max);
            ti.Add()
                .Column("int1")
                .SummaryType(SummaryType.Avg);
            ti.Add()
                .Column("int1")
                .SummaryType(SummaryType.Sum);
        });
        summary.GroupItems(ti => {
            ti.Add()
                .Column("date1")
                .SummaryType(SummaryType.Max);
            ti.Add()
                .Column("int1")
                .SummaryType(SummaryType.Avg);
            ti.Add()
                .Column("int1")
                .SummaryType(SummaryType.Sum);
            ti.Add().SummaryType(SummaryType.Count);
        });
    })
    

    One major difference compared to JavaScript is that the initialization code uses a fluent structure of method calls and lambdas. This results in a compact structure and a good Intellisense experience while writing the code.

    You can also see that “known values” used in JavaScript are represented by enums. As long as IntelliSense is available, this is easy enough to work with - as it happens, Razor IntelliSense is quite flaky for me in VS 2017, and without it this is a bit of a hassle in comparison. Importantly, this approach makes your initialization code statically typed, which is expected in the .NET environment.

    Translating the code to set up the widgets was easy, and I referred to the DevExtreme.NETCore.Demos project (from the DevExtreme package) in cases where the structure or the enums were not obvious to me.

    Setting up the data sources for the grids turned out to be the most complicated aspect. The controls actually have impressive support for data binding, which is documented here in detail. However, I already have a JavaScript implementation of the custom store that interfaces with my JavaScript service, and I want to use that - the examples in our documentation mostly refer to scenarios where the data service is implemented in .NET.

    The steps I had to make were simple in the end. For the DataGrid, I initialized my data source using a small JavaScript block:

    <script>
        var dataSource = new DevExpress.data.DataSource({
            store: dataStore
        });
    </script>
    

    The dataSource variable is then injected into the grid configuration using this syntax:

    @(Html.DevExtreme().DataGrid()
    .ID("grid")
    .DataSource(new JS("dataSource"))
    ...
    

    The only difference from the JavaScript version of my code is that the convenience initialization mechanisms for the dataSource property are not available in Razor, so I had to set up my own DataSource instance.

    For the PivotGrid, things are a bit different because the data source setup is more complicated and includes the pivot field configuration. Our documentation as well as the demo code show the use of the Store method, which again supports various data binding scenarios where you’re either using a read-only web service or a WebApi service in your .NET Core server application. However, it turned out the overload that would allow me to pass a new JS(...) is currently missing from this method. This has been noted and it will be fixed soon.

    Meanwhile, I ended up using a workaround with the Option method, which can set any JavaScript property on the object being configured. At the end of PivotGrid.cshtml you can see the configuration line that binds the PivotGrid to my existing custom data store:

    ...
    .Option("store", new JS("dataStore"));
    ...
    

    Try it!

    Please give this new front-end version a spin and let me know what you think!
