Architectural decisions on a recent project

We can recall that the only real boss of an architect is the requirements. The architect may not listen to us developers, to customers, to anyone; but the requirements are the indisputable boss.
So there is no mystery nor magic in the decision-making process: it must be ruled by the requirements.

But what was this project about? In essence, it was about consuming a SOAP service provided by a third party, so that our internal business side could update the customer case file, or unique digital profile. As part of this project, the number of requests from our platform to the provider had to be kept under control, below a rate we could consider acceptable.
Also, for both services, ours and the provider's, working with biometric data is a must: it involves fingerprints and face recognition.
So let's start looking deeper into this project.
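The requirement to keep our request rate to the provider under control could be handled, for instance, with a token bucket in front of the SOAP client. This is only a minimal sketch; the class and parameter names are hypothetical, not from the actual project:

```java
import java.util.concurrent.TimeUnit;

/** Minimal token-bucket sketch for capping outgoing calls to the provider. */
public class TokenBucket {
    private final long capacity;        // maximum burst size, in tokens
    private final double refillPerNano; // tokens regained per nanosecond
    private double tokens;              // tokens currently available
    private long lastRefill;            // timestamp of the last refill

    public TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / TimeUnit.SECONDS.toNanos(1);
        this.tokens = capacity;
        this.lastRefill = System.nanoTime();
    }

    /** Returns true when the outgoing call may proceed now. */
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Refill proportionally to the elapsed time, capped at the burst size.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false; // caller should queue or reject the request
    }
}
```

Each consumer thread would call `tryAcquire()` before invoking the provider's SOAP endpoint, queueing or rejecting the call when it returns false.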

Q- What kind of architectural approach are we going to use?

We could use a multi-layer approach, a monolith, micro-services, etc.
I realized that, given the expected demand on the service (both the load and its behavior), this solution will need some kind of horizontal scaling. One thing micro-services are genuinely good at is horizontal scaling, so a micro-service architecture sounds pretty logical in this case.

A- We are going to use a micro-service architecture.
(This is a piece of cake!)

Q- Where will we deploy our solution?

Are we going to use our own servers? Is there a chance for cloud computing?
Browsing through the documents, I found a request to prefer our MS Azure and Pivotal Cloud Foundry platform over other deployment strategies for new solutions, motivated by lower costs and shorter time to production.

A- MS Azure and Pivotal Cloud Foundry are selected as the deployment platform.
(I get paid for doing this; it feels like stealing from the company!)
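As a sketch, a PCF deployment descriptor for one of the services could look like the manifest below; the application name, memory size, and instance count are illustrative only:

```yaml
# manifest.yml -- hypothetical PCF manifest for one of the services
applications:
- name: customer-profile-service   # illustrative name
  memory: 1G
  instances: 2                     # starting instance count
  path: target/customer-profile-service.jar
  buildpacks:
  - java_buildpack
```

Horizontal scaling then becomes a matter of raising the instance count, e.g. `cf scale customer-profile-service -i 4`.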

Q- Which environments are we going to use?

It sounds like an easy one, because for all our cloud solutions we have an accelerator assistant that provides us with a complete Git source-control platform, a documentation tool, a JIRA project, and the associated environments for DevOps processes and pipelines. Those environments include the development server, the UAT server, and others.

A- We are going to use our accelerator pipelines and platform.
(So easy!)

Q- How are we going to secure our solution?

The provider told us it would be a service with a double-certificate-based security scheme on their side. Also, there was a requirement to connect different devices to our services; those devices may include smartphones, tablets, ATMs, and the regular applications on our platform.
Our cloud platform already supports OAuth 2.0 authorization flows, including JWT tokens and certificate-secured access for our solutions. That makes it easy to rely on our platform to meet the security standards for our solution.

A- We will rely on our established OAuth 2.0 flows to secure REST services over an HTTPS channel.
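For illustration, a JWT is just three Base64URL-encoded segments separated by dots. The sketch below (hypothetical class name) only decodes the payload segment to inspect its claims; signature validation, expiry checks, and the OAuth 2.0 flows themselves stay on the platform side:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPayloadDemo {
    /**
     * Decode the (unverified) payload segment of a JWT.
     * This only exposes the token structure; it performs no validation.
     */
    static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) {
            throw new IllegalArgumentException("not a JWT");
        }
        byte[] json = Base64.getUrlDecoder().decode(parts[1]);
        return new String(json, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Hypothetical unsigned token built inline for illustration.
        String header  = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"sub\":\"customer-42\"}".getBytes(StandardCharsets.UTF_8));
        System.out.println(decodePayload(header + "." + payload + "."));
        // prints {"sub":"customer-42"}
    }
}
```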

Q- What kind of persistence strategy will be used for this project?

Once again, reading the requirements, there is a feature for keeping the requested customer data for a previously defined period of time, according to business needs. This implies that, since our services will be running in the cloud with a micro-service architecture, our persistence mechanism has to be in the cloud too, with auto-replication, balancing, and scaling features supported out of the box.
With Pivotal Cloud Foundry (PCF) as our main micro-services platform running on MS Azure, it is natural to look at the SQL Server DB services that meet our goals. In fact, this kind of service is provided by default on this platform.
So, the SQL Server DB service will be our main choice for persisting the solution's data.

A- The SQL Server DB service over PCF will be used for this solution.
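A minimal sketch of what the datasource configuration could look like in a Spring Boot `application.properties` file; server, database, and variable names are placeholders (and on PCF, binding the database service to the app can inject these credentials automatically):

```properties
# Hypothetical SQL Server datasource settings -- all values are placeholders
spring.datasource.url=jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>
spring.datasource.username=${DB_USER}
spring.datasource.password=${DB_PASSWORD}
spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.jpa.hibernate.ddl-auto=validate
```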

Q- What about the service orchestration?

It is a very interesting side of this project. There will be some long-running transactions associated with keeping the customer data in the “cache” and later requesting an automatic update of that data.
Some colleagues said that running a batch file could be a way to accomplish this task. I didn't agree: those kinds of files aren't a good fit for a cloud platform, and we will need more control over the decisions to be taken along the flow and the execution of the steps of these processes.

After thinking a while about this requirement, I realized that a very light workflow engine embedded in the micro-service would perform those tasks in a clean and powerful way.
That description looks like an accurate fit for the Camunda BPM engine.

A- The Camunda BPM engine will be used for the service orchestration.
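The "keep the data, then refresh it later" flow could be modeled in BPMN 2.0 roughly as below. This is a hypothetical sketch: the process id, delegate bean, and 30-day retention period are placeholders, not the real project's model:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:camunda="http://camunda.org/schema/1.0/bpmn"
             targetNamespace="http://example.com/bpmn">
  <process id="refreshCustomerData" isExecutable="true">
    <startEvent id="start"/>
    <sequenceFlow id="flow1" sourceRef="start" targetRef="waitRetention"/>
    <!-- Long-running wait: the engine persists the instance and wakes it up -->
    <intermediateCatchEvent id="waitRetention">
      <timerEventDefinition>
        <timeDuration xsi:type="tFormalExpression">P30D</timeDuration>
      </timerEventDefinition>
    </intermediateCatchEvent>
    <sequenceFlow id="flow2" sourceRef="waitRetention" targetRef="refreshTask"/>
    <!-- Hypothetical delegate bean that calls the provider's SOAP service -->
    <serviceTask id="refreshTask" name="Refresh customer data"
                 camunda:delegateExpression="${refreshCustomerDelegate}"/>
    <sequenceFlow id="flow3" sourceRef="refreshTask" targetRef="end"/>
    <endEvent id="end"/>
  </process>
</definitions>
```

The timer event is what makes the embedded engine attractive here: the wait survives restarts because the state lives in the engine's database, not in memory.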


To sum up: we are going to implement a couple of REST services using the Spring Boot framework, with OAuth 2.0 facilities over a secure HTTPS endpoint. Java will be our programming language, and certificates will be used for communicating with the provider's service. Pivotal Cloud Foundry and MS Azure will be our main cloud deployment platform, our internal accelerator flow and app will provide the pipeline and servers for the solution, Camunda BPM will be used for service orchestration, and the SQL Server DB service over Azure will provide the persistence layer. A micro-services architecture will be the model for building this solution.
