API - How Tos
Aggregates / Rollups
How to implement?
- Database SQL has evolved to offer advanced aggregation and grouping abilities. Why not leverage it?
- To get any type of aggregation on the data returned from your database and through APIs, first create database views on the tables, with any/all table joins and groupings.
- One can also join 2 views with each other for multi-level aggregations or transformations.
- One can form multi-level views utilizing WITH blocks in SQL, if supported by the database being used.
- Then import the final view object into Builder Studio and generate code on it, as in the sketch after this list.
- The APIs generated on such an object will fulfill the data aggregation needs of your business requirements.
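As a hedged illustration (not code generated by Builder Studio), the sketch below shows what a read-only backend entity mapped to such an aggregation view could look like; the view name, columns, and package are hypothetical, and jakarta.persistence assumes Spring Boot 3 (use javax.persistence on Spring Boot 2).

package com.example.emapi.app.reports;

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

// Hypothetical read-only entity over an aggregation view, e.g. a view created as:
//   CREATE VIEW customer_sales_summary_v AS
//     SELECT customer_id, COUNT(*) AS order_count, SUM(amount) AS total_amount
//     FROM orders GROUP BY customer_id;
@Entity
@Table(name = "customer_sales_summary_v")
public class CustomerSalesSummary {

    @Id
    @Column(name = "customer_id")
    private Long customerId;

    @Column(name = "order_count")
    private Long orderCount;

    @Column(name = "total_amount")
    private Double totalAmount;

    // Getters only: the view is read-only, so no setters are required.
    public Long getCustomerId() { return customerId; }
    public Long getOrderCount() { return orderCount; }
    public Double getTotalAmount() { return totalAmount; }
}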
Backend Templates
Templates are available for the Spring Java backend (coming soon for the Express Node.js backend). One can customize them with low-code changes to implement these how-tos.
Please refer to the section Backend Templates
Code Coverage - SonarQube, JaCoCo
- Code Coverage Analysis
Extend
- Identifiers, @GeneratedValue
- Validations
- DTO (Data Transfer Object)
Security
- OAuth2
- Keycloak
- Social (Google, Github, ...)
Logging - ELK (Elasticsearch, Logstash, and Kibana)
Integrating - Kafka, Kafka Streams
Testing
- System Testing - Performance, Load testing
- TDD (Test Driven Development)
- Smoke Testing
- BDD (Behavior-Driven Development)
Migrate to Spring Boot 3 (Spring Boot 3.2.3 and Java 17 or Java 21)
How to Use Redis Cache
Redis is primarily designed as an in-memory data store for high-performance, low-latency data access. Using it as a cache lets your APIs store and retrieve data quickly, improving their performance.
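As a minimal sketch of the Spring cache abstraction that Redis plugs into (the service, cache name, and key below are hypothetical and not necessarily how the generated code is structured; it also assumes @EnableCaching is active in the application):

package com.example.emapi.app.cache;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical sketch: the first call executes the lookup and stores the result in the
// "customers" cache (backed by Redis once spring.cache.type=redis); repeated calls for
// the same id are served from the cache instead of hitting the database.
@Service
public class CustomerLookupService {

    @Cacheable(value = "customers", key = "#customerId")
    public String findCustomerName(Long customerId) {
        // Expensive lookup (e.g. a database query), executed only on a cache miss.
        return "customer-" + customerId;
    }
}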
How to Run Redis from Docker
docker pull redis:7.0.6-alpine
docker run --name my-redis -p 6379:6379 -d redis:7.0.6-alpine
How to stop the Redis Docker container:
docker stop my-redis
Enable Redis Cache in application.properties
For dbrest REST APIs:
- Edit emapi\app\dbrest\src\main\resources\application.properties
For dbgraphql GraphQL APIs:
- Edit emapi\app\dbgraphql\src\main\resources\application.properties
Locate the section shown below and follow the instructions to enable it:
# ---- Disabled redis cache state -------------
spring.cache.type=simple
...
# ---- Enable redis cache
# ---- By commenting above and un-commenting below -------------
...
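For reference, a typical enabled state uses the standard Spring Boot Redis properties shown below; this is an assumption based on standard Spring Boot 3.x property names (Spring Boot 2.x uses spring.redis.host / spring.redis.port), not a copy of the generated file, and the host/port match the Docker command above:
# ---- Enabled redis cache state (typical Spring Boot 3.x properties) -------------
spring.cache.type=redis
spring.data.redis.host=localhost
spring.data.redis.port=6379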
Event-Driven Microservices - Kafka
Build Event-Driven Microservices applications using Spring Boot and Apache Kafka.
Available with EasyManage Templates : Backend Templates : Kafka
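As an illustration only (not the EasyManage Kafka template itself), a minimal spring-kafka sketch of publishing and consuming a domain event could look like this; the topic name, group id, and payload format are hypothetical:

package com.example.emapi.app.events;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Hypothetical sketch: one microservice publishes an event, another reacts to it asynchronously.
@Service
public class OrderEventsService {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderEventsService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Producer side: publish an event after the local transaction commits.
    public void publishOrderCreated(String orderId) {
        kafkaTemplate.send("order-events", orderId, "ORDER_CREATED:" + orderId);
    }

    // Consumer side: another service's listener reacts to the event.
    @KafkaListener(topics = "order-events", groupId = "billing-service")
    public void onOrderEvent(String event) {
        System.out.println("Received event: " + event);
    }
}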
Kafka Streams
Kafka Streams enables the processing of streaming events.
Available with EasyManage Templates : Backend Templates : Kafka Streams
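As a hedged sketch (not the EasyManage Kafka Streams template), a minimal Kafka Streams topology with Spring could look like this; the topic names and filtering rule are hypothetical, and the application id and bootstrap servers are assumed to be set in application properties:

package com.example.emapi.app.streams;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafkaStreams;

// Hypothetical topology: read raw order events, keep only high-value ones,
// and write them to a downstream topic for further processing.
@Configuration
@EnableKafkaStreams
public class OrderStreamTopology {

    @Bean
    public KStream<String, String> highValueOrders(StreamsBuilder builder) {
        KStream<String, String> orders = builder.stream("order-events");
        KStream<String, String> highValue =
                orders.filter((key, value) -> value != null && value.contains("HIGH_VALUE"));
        highValue.to("high-value-orders");
        return highValue;
    }
}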
Distributed Transactions in Microservices
How to manage distributed transactions across multiple microservices, and also solve the problem of distributed locking?
A distributed transaction in microservices means that what would be a single local transaction in a monolithic system is distributed across multiple services that are called in sequence.
Possible solutions
The following two patterns can resolve the problem:
- Two-phase commit - 2pc
- Saga Pattern
Two-phase commit - 2pc
Two-phase commit is used in database systems and is not a good fit for microservices.
- In microservices, it is implemented as described below:
- Uses two phases - a prepare phase and a commit phase
- With a global coordinator across microservices.
- Gives strong consistency and a guarantee that the transaction is atomic.
- But it is synchronous (blocking): the object must stay locked until the transaction completes.
Saga Pattern
In the Saga pattern:
- The distributed transaction is fulfilled by asynchronous local transactions on all the related individual microservices, and is finally completed.
- The microservices communicate with each other through an event bus.
- Each microservice fulfills its own local atomic transaction; other microservices are not blocked and no lock is kept on any object throughout the transaction.
- In case of failure of one operation within the sequence on a microservice, all prior transactions are reversed using a compensating strategy.
Notes:
- Challenges: can be difficult to debug and maintain.
- Add a process manager as an orchestrator.
- Please refer to the earlier section on Event-Driven Microservices - Kafka to implement such event-based transaction processing; a minimal compensation sketch follows below.
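As a minimal, choreography-style sketch of the compensating step (hypothetical topics, payloads, and service names, building on the spring-kafka example above; not the EasyManage template code):

package com.example.emapi.app.saga;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Hypothetical saga step in an order service: commit the local transaction, publish an
// event for the next service, and compensate (cancel the order) if a downstream
// service reports a failure.
@Service
public class OrderSagaHandler {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderSagaHandler(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void placeOrder(String orderId) {
        // 1. Local transaction: persist the order in this service's own database (omitted).
        // 2. Publish an event so the payment service can run its own local transaction.
        kafkaTemplate.send("order-placed", orderId, orderId);
    }

    // Compensating transaction: triggered when the payment service publishes a failure event.
    @KafkaListener(topics = "payment-failed", groupId = "order-service")
    public void onPaymentFailed(String orderId) {
        // Reverse the earlier local transaction, e.g. mark the order as cancelled (omitted).
        System.out.println("Compensating: cancelling order " + orderId);
    }
}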
Serverless
- Available with Spring Java Backend
Enabling Serverless
- Go Serverless (Functions) with Spring Cloud Function
Spring Cloud Function
WHY? / HOW?
Spring Cloud Function - Goals:
- Promote the implementation of business logic via functions.
- Decouple the development lifecycle of business logic from any specific runtime target
so that the same code can run as a web endpoint, a stream processor, or a task.
Spring Cloud Function provides the following features:
1. Wrappers for @Beans of type Function, Consumer and Supplier, exposing them to the outside world as HTTP endpoints and/or message stream listeners/publishers with RabbitMQ, Kafka, etc.
Spring Cloud Function embraces and builds on top of the 3 core java functional interfaces:
Supplier<O>
Function<I, O>
Consumer<I>
Please refer to Spring docs: Spring Cloud Function
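As a minimal sketch of these goals (the function names below are hypothetical, not the generated ErpCustomer functions referenced in the next section), plain java.util.function beans are all Spring Cloud Function needs; with spring-cloud-starter-function-web on the classpath they are exposed as HTTP endpoints such as POST /uppercase:

package com.example.emapi.app.functions;

import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical function beans: Spring Cloud Function wraps each of them and can expose
// them as web endpoints or bind them to message streams, without changing the code.
@Configuration
public class SampleFunctions {

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }

    @Bean
    public Supplier<String> greeting() {
        return () -> "hello from a supplier";
    }

    @Bean
    public Consumer<String> audit() {
        return value -> System.out.println("audited: " + value);
    }
}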
How To Use
- The generated code provides templates for Spring Cloud Function, e.g.
emapi\lib\base-app\src\main\java\com\example\emapi\app\ErpCustomer\ErpCustomerServiceCloudFunctions.java
- These can be integrated further with any target cloud: AWS, GCP, or Azure.
Leverage them and trigger them via AWS Lambda, or deploy Spring Cloud Functions on AWS, Azure, or Google Cloud.
AWS Lambda
Please refer to Spring docs: AWS Lambda
Microsoft Azure Functions
Please refer to Spring docs: Microsoft Azure Functions
Google Cloud Functions
Please refer to Spring docs: Google Cloud Functions