Key Points


References

Reference description with linked URLs | Notes

















Key Concepts



  1. Define Containerized, Spring Boot-based Java applications using RESTful web services as APIs.

A Java app built with Spring Boot gets a structured, pre-integrated set of Spring services, whereas the standard Spring Framework offers modules to use as needed: Core, JPA, Security, MVC, etc. It is similar in concept to Grails, which came out earlier.

A containerized app or service runs in a container with its dependencies localized. The container provides another layer of security and isolation from the runtime platform. It is common to run a single service per container, but more than one can be packaged together if there is a use case for it. For a Spring Boot app, the ports need to be exposed and mapped through the container so clients can reach it with REST requests.

Any RESTful service essentially uses the standard HTTP(S) methods (GET, POST, PUT, DELETE, PATCH) to receive requests from a client app or service.

A RESTful service provides an API for a client to make requests and receive responses.

If the API is exposed to a wide variety of clients (especially externally), an API manager (DataPower, etc.) can be used to control access to the service, providing security, load balancing, and other features.

If services are to be consumed locally by an app, they can be connected by a service mesh or, for better performance, packaged as dependent libraries bundled directly in the container.

Larger organizations and companies exposing APIs externally will usually choose an API endpoint management solution offering endpoint security, dynamic routing, load balancing, and other features. Cloud providers can deliver these services; on-premise, solutions like DataPower can provide endpoint management.
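
To make the definitions above concrete, here is a minimal sketch of a Spring Boot RESTful service in Java. The class name, request path, and payload are hypothetical examples, not taken from any client code. By default the embedded server listens on port 8080, which is the port the container image needs to expose and map (for example, docker run -p 8080:8080) so clients can send REST requests.

// Minimal Spring Boot app exposing a RESTful endpoint (illustrative sketch only).
// Assumes spring-boot-starter-web is on the classpath.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class VehicleServiceApplication {

    public static void main(String[] args) {
        // Starts the embedded server on port 8080 by default; the container must
        // expose / map this port for clients to reach the API.
        SpringApplication.run(VehicleServiceApplication.class, args);
    }

    // GET /vehicles/{vin} returns a simple JSON payload; a real service would call a repository.
    @GetMapping("/vehicles/{vin}")
    public String getVehicle(@PathVariable String vin) {
        return "{\"vin\": \"" + vin + "\", \"status\": \"available\"}";
    }
}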


  2. Do you have hands-on experience implementing containerized, Spring Boot-based Java applications using RESTful web services as APIs?
    1. If Yes –
    2. Was this greenfield?
    3. Why was this the chosen path?  What other applications or tech did you explore to get to this?
    4. What was the team dynamic put in place to create success?
    5. What are the best practices when implementing?
    6. Was it successful or not? What did you learn (good / bad / indifferent) that you would apply next time?


    7. If No – What direct knowledge do you have of containerized, Spring Boot Java applications using RESTful web services as APIs?

I have not containerized a Spring Boot app. I have containerized Java web services, Node.js microservices, a MySQL database, and Hyperledger blockchains with CouchDB, Node.js, and Go using Docker and Docker Compose. I also completed a lab on running blockchain microservices in a Kubernetes network (pods running Docker containers). There is nothing that makes Spring Boot fundamentally different in containers from other Java apps. On the other hand, I have no idea how complex the client's Spring Boot apps are or the work needed to containerize them in Docker. The concepts are the same, but the work depends on their inventory.

Before Spring Boot existed, I used Grails for the same purposes at Fidelity and other clients. Grails showed the value an integrated Java stack delivers. Grails uses Groovy, a superset of Java, rather than base Java.

I tested Spring Boot by building some basic apps only, nothing in production to compare to Grails apps. Spring Boot performs better because it uses Java rather than Groovy, among other reasons. Grails offers greater developer productivity and is appropriate for low-volume production apps or test environments.

At Fidelity we used Spring frameworks, but Spring Boot did not exist then. At PTP we did create simple Spring Boot applications to load data to the ODS. I was able to improve loader performance 300% by removing Spring / Hibernate and creating simplified batch JDBC loaders. That was a better solution for that specific use case.
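
As a rough sketch of what a simplified batch JDBC loader looks like (the table, columns, JDBC URL, and credentials below are made-up examples, not the PTP schema), batching the inserts and committing in chunks removes the per-entity ORM overhead for a straight data load:

// Plain batch JDBC loader (illustrative sketch; table, columns, and connection details are invented).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class CallDataLoader {

    private static final int BATCH_SIZE = 1000;

    public void load(List<String[]> records) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/ods", "loader", "secret")) {
            conn.setAutoCommit(false);   // commit in chunks, not per row
            String sql = "INSERT INTO call_events (call_id, agent_id, duration) VALUES (?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int count = 0;
                for (String[] r : records) {
                    ps.setString(1, r[0]);
                    ps.setString(2, r[1]);
                    ps.setInt(3, Integer.parseInt(r[2]));
                    ps.addBatch();
                    if (++count % BATCH_SIZE == 0) {
                        ps.executeBatch();   // send a chunk of rows in one round trip
                        conn.commit();
                    }
                }
                ps.executeBatch();           // flush the remainder
                conn.commit();
            }
        }
    }
}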

At DMX we use Node.js for services except for some Java utilities I wrote.


  3. What Service Oriented Architecture tech have you used?
    1. REST
    2. SOAP
    3. RMI
    4. MQ
    5. RPC
    6. APPC
    7. Web sockets
    8. BCU
    9. custom IoT interfaces - JobKeeper, Toledo scales, Remanco terminals
    10. 5280 CAMs
    11. sftp

sftp was batch. The rest were real-time, integrated message services between a producer and one or more consumers.


  4. Do you have direct hands-on exposure to a microservices-based framework?

Yes ...

Fidelity Java web services were normally built as microservices but, with some exceptions, did not run in containers.

At PTP I created a MySQL Docker container to run MySQL in a RHEL VM.

At DMX, we have created 12+ microservices for an auto marketplace: authentication, media, vehicles, vehicle-info, auctions, programs, payments, etc.

The microservices were moved into Docker to simplify deployment across different environments, using CircleCI on IBM Cloud.

We also integrated with a wide variety of vendor and government APIs to add value and content to our services.

At a later point, the plan is to consider providing APIs to vendors and, potentially, anonymized data sets as additional revenue.

    1. If yes –
    2. What was the decision to use this? 

The goal was a highly performant cloud marketplace that would scale more easily and be easier to administer than traditional monolithic services.

At DMX in 2016, JavaScript and Node.js had better support for dynamic, asynchronous, event-driven architectures than many Java stacks, so the decision was made to use Node.js for services, which has worked well so far.

    3. Does this apply better to certain areas and environments than to others?

Developers work with local copies of an app or service without containers; there is no need for them locally, and debugging is easier.

The services are packaged as images and deployed to the environment when ready, after CircleCI completes the build.

If you look at CI/CD going through SIT, staging, and production at a minimum, having separate microservices that can auto-scale as needed and map easily to different environments improves velocity, quality, support, etc.

A database cluster will often operate better without container overhead. The database servers are normally dedicated environments and get more specialized focus, though at PTP I did put MySQL into Docker for dev environments.

The data services and application services can be more flexible when they are deployed in containers.

    4. Which one is better and why? (Not looking for an academic reply, rather a hands-on example.)

At DMX, deployment velocity improved dramatically with CI/CD and containers, so quality and end-user test cases are better integrated with the development process.

    5. What did you learn from the previous exposure? With that knowledge, how would you proceed differently, if at all?

Across the apps, services, and databases I covered, only the database servers really worked better outside containers.

    6. If No –
    7. What direct knowledge do you have of microservices in a practical application?


  5. What is your hands-on experience with AWS native vs AWS?
    1. Which is a better service and why?

At DMX we used AWS for MongoDB and IBM Cloud for our marketplace (microservices and media management).

The IBM environment was easier to administer than AWS.

In our scenario we chose to create standard microservices using open-source npm modules and frameworks instead of the platform-specific services IBM provided, preserving our application portability across cloud platforms at minimal cost.

The native AWS services are in many cases easier to set up than equivalent open-source products, but they may lack features needed for specific use cases, in addition to the portability issues.

In most scenarios, Hyperledger Fabric on AWS is a better solution for creating a trusted network than the equivalent native AWS ledger service (QLDB). If you only want a permanent ledger for transaction history, QLDB is simpler and a good choice.


  6. Have you been responsible for selecting a message bus system?
    1. If yes --
    2. What products did you look at and why?  

I created a message bus in a database for a bond trading app that delivered 85% of the performance of MQ in our use-case benchmark.
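
A minimal sketch of that database-as-message-bus pattern (the table, columns, and connection details are hypothetical, not the trading app's schema): a consumer claims the next pending row under a row lock inside a transaction, processes it, and marks it done, so competing consumers never handle the same message.

// Database-backed message queue consumer (illustrative sketch only).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DbQueueConsumer {

    public void consumeNext() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/bus", "bus_user", "secret")) {
            conn.setAutoCommit(false);
            // Lock one pending message; a competing consumer waits on the lock and,
            // once this transaction commits, re-reads and picks the next pending row.
            // (MySQL 8+ can add SKIP LOCKED to avoid the wait entirely.)
            String pick = "SELECT msg_id, payload FROM message_queue "
                        + "WHERE status = 'PENDING' ORDER BY msg_id LIMIT 1 FOR UPDATE";
            try (PreparedStatement ps = conn.prepareStatement(pick);
                 ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    long id = rs.getLong("msg_id");
                    process(rs.getString("payload"));   // application-specific handling
                    try (PreparedStatement done = conn.prepareStatement(
                            "UPDATE message_queue SET status = 'DONE' WHERE msg_id = ?")) {
                        done.setLong(1, id);
                        done.executeUpdate();
                    }
                }
            }
            conn.commit();   // releases the row lock and makes the status change visible
        }
    }

    private void process(String payload) {
        System.out.println("processing: " + payload);
    }
}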

At Fidelity I was responsible for our MQ bus, used for an e-workflow reporting app that produced client reports in multiple steps across multiple systems, and for consolidation of operations logs from our 200+ servers for analysis.

At PTP we used RabbitMQ for posting IVR call data from Genesys systems to our ODS in MySQL, using Java clients.

At DMX, the Hyperledger Fabric network I created for the VINblock blockchain used Kafka to dynamically update all peers in the network concurrently.

    3. Why did you select the system, and what successes / areas for improvement did you learn?

MQ (and related message servers: RabbitMQ, ActiveMQ, etc.) is well designed for simple pub/sub processing where a message is handled by a single processor. For many other scenarios, Kafka is a high-performance, distributed messaging system supporting multiple delivery patterns: message queuing and distributed replication. Kafka is a pull system: consumers pull messages from the Kafka brokers at their own pace. Traditional messaging systems normally push messages from a queue to the consumers.
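
A minimal Java sketch of that pull model using the standard kafka-clients consumer API (the topic name, group id, and broker address are assumptions): the consumer subscribes and then repeatedly polls the broker for batches at its own pace, rather than having messages pushed to it.

// Kafka consumer illustrating the pull (poll) model; names and addresses are invented.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PeerUpdateConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "peer-updates");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("peer-updates"));
            while (true) {
                // The consumer pulls a batch from the broker at its own pace; nothing is pushed to it.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                }
            }
        }
    }
}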

    4. If no –
    5. What is your exposure to those systems?
  7. How can a client develop an application hosted in the cloud that's not cloud native?

Not an issue. A cloud-native app often uses proprietary services delivered by the specific cloud provider that scale easily on that platform. DynamoDB from AWS is a NoSQL database that can perform well, but it doesn't match the functionality or portability of MongoDB.

  8. What challenges are faced when advising a client on a new software solution and/or methodology?

We all have biases based on our experiences with platforms; the client will as well. If the client is looking for the best solution for their specific use case, we typically need to compare and evaluate the two or three top technical platforms for that use case. I've been through a variety of technical benchmarks on databases, containers, and web services platforms. The time and cost to quickly validate the best platform or services usually pays big dividends.

  9. What do you need to ensure a successful project?

A client that has a strong business case or mandatory requirement. A clear priority from their user community or network that ranks features, resources, and delivery time frames in the right order. A clear understanding by all parties of the responsibilities and risks for the business, project, and technical dependencies to deliver the solution. A process for making decisions that all parties support. An agile delivery process that identifies and resolves issues quickly. A clear understanding by all parties of what the new solution is and how it will work. Accurate analytics on key factors: volumes, security protocols, performance, user journeys, data quality, and more. Oh yeah, and great documentation and test cases on the existing systems.

I'm not asking for anything unreasonable, right?

Top questions for this project:

1. Do we have a clear definition of priorities for current and future requirements from TSA?

2. Are we doing a tactical migration (quick and cheap) or a strategic migration (designed to meet / exceed future TSA requirements)?




Potential Value Opportunities





Potential Challenges


Cloud vs Cloud Native development

https://www.digitalistmag.com/cio-knowledge/2018/09/27/understanding-distinction-between-cloud-based-cloud-native-application-development-06187471

While cloud-based and cloud-native development share many characteristics, cloud-native development differs from cloud-based development in important ways.

For starters, cloud-native development refers to application development that is container-based, dynamically orchestrated, and leverages microservices architectures as per the CNCF’s definition of cloud-native development. Because cloud-native applications run in containers and are dynamically orchestrated, they exhibit many of the attributes of applications deployed in cloud-based infrastructures, such as elastic scalability and high availability.

Linux Foundation (CNCF) definition of cloud-native development

https://github.com/cncf/toc/blob/master/DEFINITION.md

Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.


Candidate Solutions



Step-by-step guide for Example



sample code block

 



Recommended Next Steps


