Enterprise Architecture

Genio ASP.Net MVC target

The Genio ASP.Net MVC target delivers enterprise-grade web applications. Each of these applications can manage hundreds of user interfaces with consistent usability, and handles authentication, logging, security, data integrity and persistence.

General Architecture

Diagram of Enterprise Architecture

This diagram visualizes the main components of a consolidated Enterprise Architecture for Genio generated applications. Its main objectives are:

  • To maximize scalability options by clearly identifying isolatable service domains
  • To enable a DevOps modus operandi
  • To create a general guideline for how Genio generated systems need to be deployed

Each of the modules represents a set of functions that can be deployed in isolation and, in most cases, can be scaled through clustering or made highly available through failover.

Certain modules, like SIS, are placeholders for any specific business module. For example, a given solution can have several SIS instances that represent different business domains. This means the solution is free to deploy all business services on the same machine or to allocate dedicated hardware to each one.

The databases follow a similar guideline: they can be housed in the same SQL Server instance or distributed across dedicated SQL instances and dedicated hardware.

Application server [SIS]

The main application generated by Genio. It contains all the user interfaces and business rules to process the user requests. This application is structured in layers for the Front end, Business layer, and Data layer.

Diagram of System Organization

Front End Layer

Technology: ASP.Net MVC, Html5, Razor, VueJs

Model–view–controller (MVC) is a software architecture pattern which separates the representation of information from the user's interaction with it. The model consists of a proxy to the application data, business rules, logic, and functions. A view can be any output representation of data, such as a chart or a diagram. Multiple views of the same data are possible, such as a bar chart for management and a tabular view for accountants. The controller mediates input, converting it to commands for the model or view.

MVC structure organization

Our common application framework builds upon the ASP.Net MVC technology to connect the model with the business logic and map it onto a ViewModel. This transformation allows front-end logic to be reused across different user requests, enables interactive and stateful refreshing of data, and provides an efficient data format for the views.

Views take this interface-centric representation of the data and render it onto the final interface the user will interact with. This can be done in simple Html5 (html+css+js) through Razor templates, or as a SPA (single page application) using the VueJs framework.
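
As a minimal, purely illustrative sketch (all class and method names here are hypothetical and are not the actual Genio framework API), a controller might map the business model onto a ViewModel and hand it to a Razor view:

    // Hypothetical sketch: names and stubs are illustrative, not the Genio framework API.
    using System.Web.Mvc;

    public class Customer { public string Name; public string Status; }

    public static class CustomerService
    {
        // Stub standing in for the generated business layer.
        public static Customer Get(string id) => new Customer { Name = "Acme", Status = "Active" };
    }

    public class CustomerViewModel
    {
        // Interface-centric representation of the entity, shaped for the view.
        public string Name { get; set; }
        public string Status { get; set; }
    }

    public class CustomerController : Controller
    {
        // The controller mediates the request, asks the business layer for the model
        // data and maps it onto a ViewModel before a Razor template renders it.
        public ActionResult Details(string id)
        {
            var customer = CustomerService.Get(id);
            var viewModel = new CustomerViewModel { Name = customer.Name, Status = customer.Status };
            return View(viewModel);
        }
    }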

Business Layer

Technology: C#

The business layer contains all the entities, their rules and interactions. Among many other capabilities, it provides:

  • Entity data objects
  • Entity relational joining
  • Data business validation
  • Role validation
  • Derived data calculation and propagation
  • Interface independent authentication and authorization

Front-end layers and services can reuse all the business APIs present here for consistent business execution, while maintaining the flexibility to create many different services on top of it.
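
A minimal sketch of the idea, using hypothetical names that do not correspond to the generated classes, could be an entity data object exposing a derived field and a business validation rule:

    // Hypothetical sketch of a business-layer entity; the generated classes differ.
    using System;
    using System.Collections.Generic;

    public class Invoice
    {
        public DateTime IssueDate { get; set; }
        public decimal Amount { get; set; }
        public decimal Vat { get; set; }

        // Derived data calculation: the total is always computed from its source fields.
        public decimal Total => Amount + Vat;

        // Data business validation: returns the list of rule violations.
        public IList<string> Validate()
        {
            var errors = new List<string>();
            if (Amount < 0)
                errors.Add("Amount cannot be negative.");
            if (IssueDate > DateTime.Today)
                errors.Add("Issue date cannot be in the future.");
            return errors;
        }
    }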

Data Layer

Technology: ADO.Net

The data layer isolates all concerns regarding data persistence. It abstracts away the specific SQL provider through two main mechanisms:

  • Abstracting the SQL provider library according to the configuration. It provides a common API that the layers above can use with an assurance of portability.
  • Abstracting the specifics of each SQL dialect. It provides a declarative programmatic API, similar to LINQ, that describes the query to be sent to the database provider (see the sketch after this list).
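
As a sketch of the declarative approach, assuming a hypothetical SelectQuery type (the real data-layer API differs), upper layers describe the query instead of writing dialect-specific SQL:

    // Hypothetical sketch of a declarative query description; not the actual Genio data-layer API.
    using System.Collections.Generic;

    public class SelectQuery
    {
        private readonly List<string> _fields = new List<string>();
        private string _table;
        private string _condition;

        public SelectQuery Select(params string[] fields) { _fields.AddRange(fields); return this; }
        public SelectQuery From(string table) { _table = table; return this; }
        public SelectQuery Where(string condition) { _condition = condition; return this; }

        // A provider abstraction would translate this description into the dialect of the
        // configured database; this placeholder simply emits generic SQL text.
        public string ToSql() =>
            "SELECT " + string.Join(", ", _fields) + " FROM " + _table +
            (_condition == null ? "" : " WHERE " + _condition);
    }

    // Usage: var sql = new SelectQuery().Select("Name", "Status").From("Customer")
    //                                   .Where("Status = 'Active'").ToSql();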

Among other secondary functions, this layer also supplies providers for configuration loading, file service persistence and full-text indexing abstractions.

Configuration and Integration [ADM]

The administration portal provides a business-aware service for all the maintenance and background tasks related to that business domain. From a deployment standpoint, it is the entry point to all configuration and maintenance tasks.

The configurations span all areas of the system:

  • Database provider
  • Interface formatting
  • User roles and permissions
  • Enabling external notifications
  • Endpoint configuration for emails, reports, file services

The administration portal also provides creation and maintenance of the database schema. During version upgrades of the business schemas, the maintenance service detects the version mismatch and allows an incremental schema upgrade to the new version or, in recovery cases, a full idempotent recheck of the entire schema.
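
A hedged sketch of that upgrade decision might look like the following; the method names are hypothetical and the real maintenance service is considerably richer:

    // Hypothetical sketch of the upgrade decision; the administration portal's real logic differs.
    public static class SchemaMaintenance
    {
        public static void Upgrade(int installedVersion, int targetVersion, bool recoveryMode)
        {
            if (recoveryMode)
            {
                // Full idempotent recheck: every schema object is verified and repaired,
                // safe to run repeatedly.
                RecheckFullSchema();
                return;
            }

            // Incremental upgrade: apply each versioned schema step in order.
            for (int version = installedVersion + 1; version <= targetVersion; version++)
                ApplySchemaStep(version);
        }

        static void RecheckFullSchema() { /* verify and repair all schema objects */ }
        static void ApplySchemaStep(int version) { /* apply the delta for one schema version */ }
    }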

The business domain also comes with a set of entities that publish any changes made to them. The administration interface allows defining which ones to publish to the outside and whether they follow transactional journaling or not.
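
As an assumption-laden sketch, a published change could be represented by a message such as the following, with the journaling option reflected as a flag:

    // Hypothetical sketch of a published entity change; configuration decides which entities
    // are published and whether changes follow transactional journaling.
    using System;

    public class EntityChangeMessage
    {
        public string Entity { get; set; }        // e.g. "Customer"
        public string Operation { get; set; }     // Insert, Update or Delete
        public string RecordId { get; set; }
        public DateTime ChangedAt { get; set; }
        public bool Journaled { get; set; }       // true when transactional journaling is enabled
    }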

On the other end of the integration, the administration portal also houses a set of message processors that are designed to be called asynchronously for long running tasks.

Data audit trails can also be browsed through dedicated interfaces of the administration portal. These audit trails can generate large amounts of information, so archival functions are also made available here.

Message Broker and Scheduling [QuidServer]

Long-running service that is used to schedule tasks and to route messages between systems. It enables the presence of multiple Genio generated systems without them having to know anything about each other. This allows the systems to remain passive and scalable, while QuidServer remains the only active part that orchestrates and drives the integration between systems.

Diagram of QuidServer architecture

QuidServer will provide a local dashboard where it can be fully configured and observed. This will include a number of metrics about the information flows passing through it.

The message broker will be configured with all the message sources and the protocol used to fetch them (notably: MSMQ, SOAP, SQL, and REST). A set of message processors will then be configured, which by their nature only allow SOAP/REST protocols (since a service capable of processing is assumed behind them). The final step is to configure all the intended routings between systems.

The other main functionality is to schedule the timed execution of tasks on the connected Systems. This can be used to schedule maintenance tasks, to drive business background processes, to enable external notification processing, or any other recurrent task that is needed.
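
Purely as an illustration of the configuration concepts above (none of these types are the actual QuidServer model), sources, processors, routings and scheduled tasks could be described like this:

    // Purely illustrative configuration types; not the actual QuidServer configuration model.
    public enum Protocol { Msmq, Soap, Sql, Rest }

    public class MessageSource
    {
        public string Name { get; set; }
        public Protocol Protocol { get; set; }    // MSMQ, SOAP, SQL or REST
        public string Endpoint { get; set; }
    }

    public class MessageProcessor
    {
        public string Name { get; set; }
        public Protocol Protocol { get; set; }    // only SOAP or REST, a processing service is assumed behind it
        public string Endpoint { get; set; }
    }

    public class Routing
    {
        public string FromSource { get; set; }
        public string ToProcessor { get; set; }
    }

    public class ScheduledTask
    {
        public string TargetSystem { get; set; }
        public string TaskName { get; set; }      // e.g. a maintenance or notification task
        public string Recurrence { get; set; }    // e.g. a cron-style expression
    }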

External Components

Load balancer

This component is intended only as an illustration of the need for all these services to support high availability scenarios where, to face additional load, any part of the system can be duplicated to another server to process that extra load. This requirement translates into a general need for each system to keep its internal state to a minimum or to externalize it to a specialized service that allows concurrent access and lock management.

Api gateway

The growing need for system integration and automation requires that standardized, flexible, and discoverable APIs are supplied for all the core functions of a system. Instead of making integration-specific APIs that become bound by contract to a signature that can never change or evolve, a modern API can respond in a more generic way to the request being made. This allows the same API to be used by all third-party clients rather than designing a specialized endpoint for each one. A gateway then provides discoverability of the API capabilities by allowing them to be browsed. It may also grow to provide composability, allowing cross-system requests for information.
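
A minimal sketch of such a generic endpoint, assuming an ASP.Net MVC style controller and hypothetical parameter names, could look like this:

    // Hypothetical sketch of a generic query endpoint; not a specification of the actual gateway.
    using System.Collections.Generic;
    using System.Web.Mvc;

    public class GenericApiController : Controller
    {
        // One endpoint serves all clients: the request names the entity and the fields it
        // wants, so a new consumer does not require a new, contract-bound endpoint.
        [HttpGet]
        public ActionResult Query(string entity, string fields, string filter)
        {
            var result = new Dictionary<string, object>
            {
                ["entity"] = entity,
                ["fields"] = (fields ?? "").Split(','),
                ["filter"] = filter
                // ...a real implementation would delegate to the business layer here
            };
            return Json(result, JsonRequestBehavior.AllowGet);
        }
    }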

Sql provider

Database Service that will hold the data for the system. The SIS component will support multiple providers, enabling different kinds of databases to be used.

A clone of the database service can be used to enable high availability scenarios by keeping a synchronized copy of the database. To take advantage of this copy, read-only requests can be routed to this database service.

For archiving, a specialized database can be provisioned to hold all the data considered out of date for the system, which does not need to be accessed directly by the application. Access to this information might be slow and might be done through more indirect interfaces. The indexing strategy for this data may also be specialized.
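
A minimal sketch of the read-only routing mentioned above, with hypothetical server names and connection strings:

    // Hypothetical sketch: pick a connection depending on whether the request only reads
    // data, so read load can be sent to the synchronized replica. Server names are invented.
    public static class ConnectionRouter
    {
        const string Primary     = "Server=sql-primary;Database=SIS;Integrated Security=true";
        const string ReadReplica = "Server=sql-replica;Database=SIS;Integrated Security=true";

        public static string For(bool readOnly) => readOnly ? ReadReplica : Primary;
    }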

File provider

A file service will abstract away the details of how a file is persisted. Making the file repository separate from the database allows for scenarios where multiple systems can access the same file, avoiding storage duplication and communication overhead. Instead of the file itself, each system holds the relevant information about that file: its origin, its identification, and its metadata.

  • The origin needs to be a configurable endpoint, with url and access authorization information.
  • Its identification cannot be a transient token, it needs to be a permanent locator.
  • The metadata (like the filename, file size, extension, etc.) avoids further transmissions between systems when performing operations like listing all the files.
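
A sketch of the file reference a system might hold, with hypothetical field names covering the three points above:

    // Hypothetical sketch of the file reference a system keeps instead of the file itself.
    public class FileReference
    {
        // Origin: a configurable endpoint with its access authorization information.
        public string OriginUrl { get; set; }
        public string AccessToken { get; set; }

        // Identification: a permanent locator, never a transient token.
        public string PermanentId { get; set; }

        // Metadata kept locally to avoid extra round trips when listing files.
        public string FileName { get; set; }
        public long FileSize { get; set; }
        public string Extension { get; set; }
    }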

Fulltext Search provider

A specialized database to index full-text data to enable efficient partial text queries, similar text queries, fuzzy queries, and other such specialized search functions. This kind of service usually requires an explicit command to index a piece of data and all its associations, in what is commonly referred to as a document. The querying language is also specialized to these kinds of operations, often involving linguistic context such as multiple languages, thesauri, stop words, etc.
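
As a sketch only, assuming hypothetical types rather than any specific search engine API, a document and its indexing/query provider could be modeled like this:

    // Hypothetical types only; no specific full-text engine API is implied.
    using System.Collections.Generic;

    public class SearchDocument
    {
        public string Id { get; set; }
        public string Language { get; set; }      // linguistic context used during analysis
        public Dictionary<string, string> Fields { get; } = new Dictionary<string, string>();
    }

    public interface IFullTextProvider
    {
        void Index(SearchDocument document);               // explicit command to (re)index a document
        IList<string> Search(string query, bool fuzzy);    // partial, similar or fuzzy matches
    }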

Metrics database provider

A specialized database to persist time-series data with a high intake and low storage profile. This is highly desirable for aggregating monitoring metrics for all kinds of system signals. The time-series format takes full advantage of the sequential and repetitive nature of the data, and adjusts dynamically to new metrics without requiring a full re-schema maintenance of the database. This is essential for receiving data from multiple heterogeneous systems that might have different metrics, different versions and different configurations.

The collection of monitoring data is often blocked by security restrictions when it is attempted remotely from a central location. It is often much easier to deploy multiple agents locally, near each service, to collect the data, normalize it and route it through a single channel to the metrics aggregation database. It is also useful from a performance perspective for a system to behave differently in the presence of a monitoring agent than in its absence, avoiding the overhead of data transmission when no agent is listening.
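
A sketch of the kind of time-series point a local agent might push, with hypothetical names:

    // Hypothetical sketch of a time-series point pushed by a local agent to the metrics database.
    using System;
    using System.Collections.Generic;

    public class MetricPoint
    {
        public string Name { get; set; }          // e.g. "requests_per_second"
        public DateTime Timestamp { get; set; }
        public double Value { get; set; }
        public Dictionary<string, string> Tags { get; } =
            new Dictionary<string, string>();     // system, version, host, ...
    }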

Lifecycle Manager

Rather than a single component, this layer encapsulates all the mechanisms that enable DevOps and are common to all systems, relying on them to provide handles that isolate the manager from system-specific details. Its functions are to orchestrate and oversee all the other services. Some examples of its functions: providing metric dashboards; automating the download and upgrade of a new version of a system; keeping the configuration of the infrastructure; ensuring the environment requirements are correct; providing recovery operations like rollback and restore; etc.