Table of Contents

Key Points


References


Reference description with linked URLs                      Notes

Build tools
https://jenkins.io
https://jenkins.io/doc/                                     Jenkins
https://jenkins-x.io/about/                                 Jenkins-X
https://circleci.com/dashboard                              Circle CI dashboard
Azure Pipelines


Key Concepts

Deployment targets

  • Server
  • Cloud
  • Container
  • Kubernetes
  • Fabric

CI/CD tools compared




Jenkins with Blue Ocean - visual pipeline builder

https://jenkins.io/doc/book/blueocean/


Blue Ocean rethinks the user experience of Jenkins. Designed from the ground up for Jenkins Pipeline, but still compatible with freestyle jobs, Blue Ocean reduces clutter and increases clarity for every member of the team. Blue Ocean’s main features include:

  • Sophisticated visualizations of continuous delivery (CD) Pipelines, allowing for fast and intuitive comprehension of your Pipeline’s status.

  • Pipeline editor - makes creation of Pipelines approachable by guiding the user through an intuitive and visual process to create a Pipeline.

  • Personalization to suit the role-based needs of each member of the team.

  • Pinpoint precision when intervention is needed and/or issues arise. Blue Ocean shows where in the pipeline attention is needed, facilitating exception handling and increasing productivity.

  • Native integration for branch and pull requests enables maximum developer productivity when collaborating on code with others in GitHub and Bitbucket.


This chapter covers all aspects of Blue Ocean’s functionality. It is intended for Jenkins users of all skill levels, but beginners may need to refer to some sections of the Pipeline chapter to understand some topics covered here.

For an overview of content in the Jenkins User Handbook, see the User Handbook overview.


Blue Ocean video

http://youtube.com/watch?v=mn61VFdScuk


https://jenkins.io/doc/book/blueocean/getting-started/


Jenkins-X

https://jenkins-x.io/

https://jenkins-x.io/about/

Concepts

Jenkins X is designed to make it simple for developers to work to DevOps principles and best practices. The approaches taken are based on the comprehensive research done for the book ACCELERATE: Building and Scaling High Performing Technology Organisations. You can read why we use Accelerate for the principles behind Jenkins X.

Principles

“DevOps is a set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality.”

The goals of DevOps projects are:

  • Faster time to market
  • Improved deployment frequency
  • Shorter time between fixes
  • Lower failure rate of releases
  • Faster Mean Time To Recovery

High performing teams should be able to deploy multiple times per day compared to the industry average that falls between once per week and once per month.

The lead time for code to migrate from ‘code committed’ to ‘code in production’ should be less than one hour and the change failure rate should be less than 15%, compared to an average of between 31-45%.

The Mean Time To Recover from a failure should also be less than one hour.

Jenkins X has been designed from first principles to allow teams to apply DevOps best practices to hit top-of-industry performance goals.

Practices

The following best practices are considered key to operating a successful DevOps approach:

  • Loosely-coupled Architectures
  • Self-service Configuration
  • Automated Provisioning
  • Continuous Build / Integration and Delivery
  • Automated Release Management
  • Incremental Testing
  • Infrastructure Configuration as Code
  • Comprehensive configuration management
  • Trunk based development and feature flags

Jenkins X brings together a number of familiar methodologies and components into an integrated approach that minimises complexity.

Architecture

Jenkins X builds upon the DevOps model of loosely-coupled architectures and is designed to support you in deploying large numbers of distributed microservices in a repeatable and manageable fashion, across multiple teams.


Conceptual model


Building Blocks

Jenkins X builds upon the following core components:

Kubernetes & Docker

At the heart of the system is Kubernetes, which has become the de facto virtual infrastructure platform for DevOps. Every major Cloud provider now offers Kubernetes infrastructure on demand and the platform may also be installed in-house on private infrastructure, if required. Test environments may also be created on local development hardware using the Minikube installer.

Functionally, the Kubernetes platform extends the basic Containerisation principles provided by Docker to span across multiple physical Nodes.

In brief, Kubernetes provides a homogeneous virtual infrastructure that can be scaled dynamically by adding or removing Nodes. Each Node participates in a single large flat private virtual network space.

The unit of deployment in Kubernetes is the Pod, which comprises one or more Docker containers and some meta-data. All containers within a Pod share the same virtual IP address and port space. Deployments within Kubernetes are declarative, so the user specifies the number of instances of a given version of a Pod to be deployed and Kubernetes calculates the actions required to get from the current state to the desired state by deploying or deleting Pods across Nodes. The decision as to where specific instances of Pods will be instantiated is influenced by available resources, desired resources and label-matching. Once deployed, Kubernetes undertakes to ensure that the desired number of Pods of each type remain operational by performing periodic health checks and terminating and replacing non-responsive Pods.
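
As a rough sketch of this declarative model (not taken from the Jenkins X docs; the name, namespace, and image are illustrative), a minimal Deployment manifest asking Kubernetes for three identical Pod instances might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
  namespace: finance
spec:
  replicas: 3                          # desired number of Pod instances
  selector:
    matchLabels:
      app: payments                    # label-matching ties Pods to this Deployment
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: example/payments:1.0.0    # hypothetical container image
          ports:
            - containerPort: 8080

Applying an edited copy with a different replicas value is all that is needed; Kubernetes works out which Pods to create or delete to reach the new desired state.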

To impose some structure, Kubernetes allows for the creation of virtual Namespaces which can be used to separate Pods logically, and to potentially associate groups of Pods with specific resources. Resources in a Namespace can share a single security policy, for example. Resource names are required to be unique within a Namespace but may be reused across Namespaces.

In the Jenkins X model, a Pod equates to a deployed instance of a Microservice (in most cases). Where horizontal scaling of the Microservice is required, Kubernetes allows multiple identical instances of a given Pod to be deployed, each with its own virtual IP address. These can be aggregated into a single virtual endpoint known as a Service which has a unique and static IP address and a local DNS entry that matches the Service name. Calls to the Service are dynamically remapped to the IP of one of the healthy Pod instances on a random basis. Services can also be used to remap ports. Within the Kubernetes virtual network, services can be referred to with a fully qualified domain name of the form: <service-name>.<namespace-name>.svc.cluster.local which may be shortened to <service-name>.<namespace-name> or just <service-name> in the case of services which fall within the same namespace. Hence, a RESTful service called ‘payments’ deployed in a namespace called ‘finance’ could be referred to in code via http://payments.finance.svc.cluster.local, http://payments.finance or just http://payments, dependent upon the location of the calling code.
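
Continuing the ‘payments’ in ‘finance’ example, a minimal Service sketch along these lines (port numbers are illustrative) would create the DNS name payments.finance.svc.cluster.local and remap port 80 to the container port:

apiVersion: v1
kind: Service
metadata:
  name: payments
  namespace: finance
spec:
  selector:
    app: payments          # traffic is spread across healthy Pods carrying this label
  ports:
    - port: 80             # port exposed on the Service's virtual IP
      targetPort: 8080     # container port the request is remapped to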

To access Services from outside the local network, Kubernetes requires the creation of an Ingress for each Service. The most common form of this utilises one or more load balancers with static IP addresses, which sit outside the Kubernetes virtual infrastructure and route network requests to mapped internal Services. By creating a wildcard external DNS entry for the static IP address of the load balancer, it becomes possible to map services to external fully-qualified domain names. For example, if our load balancer is mapped to *.jenkins-x.io then our payments service could be exposed as http://payments.finance.jenkins-x.io.
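
A hedged sketch of the corresponding Ingress (the exact controller, annotations, and TLS settings vary by cluster) that maps the wildcard host name onto the internal Service might be:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments
  namespace: finance
spec:
  rules:
    - host: payments.finance.jenkins-x.io    # matches the wildcard DNS entry on the load balancer
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: payments
                port:
                  number: 80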

Kubernetes represents a powerful and constantly improving platform for deploying services at massive scale, but is also complex to understand and can be difficult to configure correctly. Jenkins X brings to Kubernetes a set of default conventions and some simplified tooling, optimised for the purposes of DevOps and the management of loosely-coupled services.

The jx command line tool provides simple ways to perform common operations upon Kubernetes instances like viewing logs and connecting to container instances. In addition, Jenkins X extends the Kubernetes Namespace convention to create Environments which may be chained together to form a promotion hierarchy for the release pipeline.

A Jenkins X Environment can represent a virtual infrastructure environment such as Dev, Staging, Production etc for a given code team. Promotion rules between Environments can be defined so that releases may be moved automatically or manually through the pipeline. Each Environment is managed following the GitOps methodology - the desired state of an Environment is maintained in a Git repository and committing or rolling back changes to the repository triggers an associated change of state in the given Environment in Kubernetes.

Kubernetes clusters can be created directly using the jx create cluster command, making it simple to reproduce clusters in the event of a failure. Similarly, the Jenkins X platform can be upgraded on an existing cluster using jx upgrade platform. Jenkins X supports working with multiple Kubernetes clusters through jx context and switching between multiple Environments within a cluster with jx environment.

Developers should be aware of the capabilities that Kubernetes provides for distributing configuration data and security credentials across the cluster. ConfigMaps can be used to create sets of name/value pairs for non-confidential configuration meta-data, and Secrets provide a similar but encrypted mechanism for security credentials and tokens. Kubernetes also provides a mechanism for specifying Resource Quotas for Pods, which is necessary for optimising deployments across Nodes and which we shall discuss shortly.
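
As an illustrative sketch (names and values are hypothetical), a ConfigMap and a Secret for the payments service could be declared as follows; note that the Secret value is supplied in plain text via stringData and stored base64-encoded by Kubernetes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: payments-config
  namespace: finance
data:
  LOG_LEVEL: "info"              # non-confidential name/value configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: payments-secret
  namespace: finance
type: Opaque
stringData:
  API_TOKEN: "replace-me"        # credential material, kept out of ConfigMaps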

By default, Pod state is transient. Any data written to the local file system of a Pod is lost when that Pod is deleted. Developers should be aware that Kubernetes may unilaterally decide to delete instances of Pods and recreate them at any time as part of the general load balancing process for Nodes so local data may be lost at any time. Where stateful data is required, Persistent Volumes should be declared and mounted within the file system of specific Pods.

Helm and Draft

Interacting directly with Kubernetes involves either manual configuration using the kubectl command line utility, or passing various flavours of YAML data to the API. This can be complex and is open to human error creeping in. In keeping with the DevOps principle of ‘configuration as code’, Jenkins X leverages Helm and Draft to create atomic blocks of configuration for your applications.

Helm simplifies Kubernetes configuration through the concept of a Chart, which is a set of files that together specify the meta-data necessary to deploy a given application or service into Kubernetes. Rather than maintain a series of boilerplate YAML files based upon the Kubernetes API, Helm uses a templating language to create the required YAML specifications from a single shared set of values. This makes it possible to specify re-usable Kubernetes applications where configuration can be selectively over-ridden at deployment time.
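
A minimal sketch of the idea (chart layout and values are illustrative, not a complete chart): a shared values.yaml provides defaults, and a template renders the Kubernetes YAML from them, so a single value can be selectively overridden at deployment time, for example with helm install payments ./chart --set replicaCount=5.

# values.yaml - shared defaults that can be overridden per deployment
replicaCount: 2
image:
  repository: example/payments
  tag: "1.0.0"

# templates/deployment.yaml - fragment rendered from the values above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-payments
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: payments
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"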


Feature Maturity

Definitions and processes around how features mature or are deprecated



Circle-CI



Azure Pipelines






Potential Value Opportunities



Potential Challenges



Candidate Solutions



OpenTofu

https://opentofu.org/docs/intro/



https://opentofu.org/faq/#why-use-opentofu



https://opentofu.org/docs/intro/vs/



https://opentofu.org/manifesto/



GitOps



GitOps Cheat Sheet


gitops-The Essentials of GitOps.pdf.  link

gitops-The Essentials of GitOps.pdf.  file

The "Essentials of GitOps" guide from DZone provides a comprehensive overview of GitOps principles and practices. Here are the key points:

1. **Definition**: GitOps uses Git as a single source of truth for declarative infrastructure and applications.
2. **Benefits**: It offers improved deployment consistency, faster delivery, and enhanced security.
3. **Core Principles**: GitOps is based on declarative descriptions and version-controlled infrastructure.
4. **Workflow**: Includes managing configurations in Git, automatic deployment, and continuous monitoring.
5. **Tools**: Common tools include Flux, Argo CD, and Jenkins X.
6. **Security**: Emphasizes secure Git repositories and access controls.
7. **Examples**: Includes YAML configurations for Kubernetes deployments.
8. **Best Practices**: Version control everything, use pull requests, and implement automated testing.
9. **Challenges**: Addresses potential issues like managing secrets and handling stateful applications.
10. **Future Trends**: Highlights the growing importance of policy-driven automation and AI integration.
11. **Case Studies**: Examples of successful GitOps implementations in various organizations.
12. **CI/CD Integration**: Explains how GitOps fits into the broader CI/CD pipeline.
13. **Metrics**: Monitoring and measuring success using specific metrics.
14. **Community**: Encourages participation in GitOps communities for support and collaboration.
15. **Resources**: Provides additional resources for learning and implementation.

For more details, refer to the full guide [here](https://dzone.com/storage/assets/16799903-sponsored-refcard-339-essentialsofgitops-2023.pdf).
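
To make the declarative pattern concrete, here is a hedged sketch using Argo CD (one of the tools listed above); the repository URL, path, and namespaces are hypothetical. The Application resource points the cluster at a Git repository, and the controller keeps the live state in sync with what is committed there:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-config.git   # Git as the single source of truth
    targetRevision: main
    path: apps/payments                                      # folder holding the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: finance
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift back to the committed state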





HashiCorp Terraform License Change


Terraform was open-sourced in 2014 under the Mozilla Public License (v2.0) (the “MPL”). Over the next ~9 years, it built up a community that included thousands of users, contributors, customers, certified practitioners, vendors, and an ecosystem of open-source modules, plugins, libraries, and extensions.

Then, on August 10th, 2023, with little or no advance notice or chance for much, if not all, of the community to have any input, HashiCorp switched the license for Terraform from the MPL to the Business Source License (v1.1) (the “BUSL”), a non-open source license. In our opinion, this change threatens the entire community and ecosystem that's built up around Terraform over the last 9 years.


HashiCorp Consul


Consul alternatives

https://www.gartner.com/reviews/market/service-mesh/vendor/hashicorp/product/hashicorp-consul/alternatives


https://stackshare.io/consul/alternatives



Ansible

https://docs.ansible.com/ansible/latest/index.html

Ansible automates the management of remote systems and controls their desired state.

Ansible requires Python 3.x to run on a node.

Control node

A system on which Ansible is installed. You run Ansible commands such as ansible or ansible-inventory on a control node.

Inventory

A list of managed nodes that are logically organized. You create an inventory on the control node to describe host deployments to Ansible.

Managed node

A remote system, or host, that Ansible controls.

Basic components of an Ansible environment include a control node, an inventory of managed nodes, and a module copied to each managed node.
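
A small YAML inventory sketch (group and host names are hypothetical) showing how managed nodes are described on the control node:

all:
  children:
    webservers:
      hosts:
        web1.example.com:
        web2.example.com:
    databases:
      hosts:
        db1.example.com:
          ansible_user: admin      # per-host variables can be set inline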


Ansible provides open-source automation that reduces complexity and runs everywhere. Using Ansible lets you automate virtually any task. Here are some common use cases for Ansible:

  • Eliminate repetition and simplify workflows

  • Manage and maintain system configuration

  • Continuously deploy complex software

  • Perform zero-downtime rolling updates

Ansible uses simple, human-readable scripts called playbooks to automate your tasks. You declare the desired state of a local or remote system in your playbook. Ansible ensures that the system remains in that state.
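
A minimal playbook sketch (the webservers group and the nginx package are illustrative) showing the declarative style; rerunning it makes no changes once the system is already in the described state:

---
- name: Ensure web servers are configured
  hosts: webservers                      # group from the inventory
  become: true                           # escalate privileges for package/service tasks
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present                   # idempotent - only installs if missing
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true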

As automation technology, Ansible is designed around the following principles:

  • Agent-less architecture: Low maintenance overhead by avoiding the installation of additional software across IT infrastructure.
  • Simplicity: Automation playbooks use straightforward YAML syntax for code that reads like documentation. Ansible is also decentralized, using SSH with existing OS credentials to access remote machines.
  • Scalability and flexibility: Easily and quickly scale the systems you automate through a modular design that supports a large range of operating systems, cloud platforms, and network devices.
  • Idempotence and predictability: When the system is in the state your playbook describes, Ansible does not change anything, even if the playbook runs multiple times.




CLI Cheatsheet

https://docs.ansible.com/ansible/latest/command_guide/cheatsheet.html



Circle CI



NGrok - commercial SDN API gateway as a service


ngrok.com-Introducing ngroks Developer-Defined API Gateway Shifting the Paradigm of API Delivery.pdf  link

ngrok.com-Introducing ngroks Developer-Defined API Gateway Shifting the Paradigm of API Delivery.pdf file

https://pinggy.io/blog/best_ngrok_alternatives/

Ngrok is an ingress-as-a-service that provides tunnels for instant ingress to your apps in any cloud, private network, or device. Using Ngrok tunnels, you can share your website / app from your localhost. It has many other use cases, such as connecting to IoT devices behind NAT and firewalls, receiving webhooks, debugging HTTP requests, and more. Recently, ngrok has expanded its offerings to API gateway, firewall, and load balancing to host on-premise apps and services. Although very mature, Ngrok has its own limitations and can be a complex ingress-as-a-service.

Pros of Ngrok

  • Ngrok client is available for Linux, Mac, Windows, and Docker.
  • Authenticated URLs using HTTP Basic Authentication, OAuth 2.0, JWT, Mutual TLS, etc.
  • Request response introspection and replay capability.
  • Supports custom domains.
  • Webhook verification on the fly for popular services such as Twilio, Facebook, etc.
  • Manage tunnels remotely.
  • Advanced features such as custom routing, global load balancing, wildcard custom domains.

Cons of Ngrok

  • Need to sign in to the Ngrok client to use the service.
  • 5GB per month bandwidth cap in the starting paid plan.
  • Requirement of downloading the Ngrok client binary.
  • No UDP tunnels.
  • In the free tier, users visiting an Ngrok URL are presented with an Ngrok page first.

Ngrok price

https://ngrok.com/pricing (see updates on pricing here)

Ngrok starts at $8 per month for the “Personal” plan, which provides one persistent domain and TCP port. It has a bandwidth cap of 1GB per month. The “Pro” plan, priced at $20 per month, offers features such as IP whitelisting and unlimited webhook verification. It charges $0.10/GB for bandwidth exceeding 1GB per month.

In this article, we will explore the 10 best alternatives to Ngrok in 2024. We will cover the features, installation process, ease of use, and pricing of the Ngrok alternatives to help you choose the best one.

  1. Pinggy
  2. LocalXpose
  3. Localtunnel
  4. Zrok
  5. localhost.run
  6. serveo
  7. Tailscale
  8. Cloudflare Tunnel
  9. Pagekite
  10. Playit.gg


1. Pinggy.io

Pinggy.io is a tunneling tool that gives a public address to access your localhost, even while sitting behind a NAT or a firewall, all with a single command. With this Ngrok alternative, a single command gives users access to your website / app hosted on localhost without downloading anything and without configuring the cloud, port forwarding, DNS, or a VPN.

To see how simple it is to open a tunnel, here is an example. If you want to share your React app running on localhost:3000, you can do so using Pinggy with the command:

ssh -p 443 -R0:localhost:3000 a.pinggy.io

Pinggy is one of the Ngrok alternatives which you can try out for free without signing up for an account. Compared to Ngrok, it provides features such as QR codes for tunnel URLs and an HTTP request / response inspection tool within the terminal.


Pros of Pinggy


  • No need to download anything.
  • Provides a terminal user interface with QR codes and request inspector.
  • Built-in web-debugger to monitor, inspect, modify, and replay HTTP requests.
  • Works on Mac / PC / Linux / Docker.
  • Provides TCP tunnels to access IoT devices and custom protocols.
  • Single command handles all configuration as well as authentication.
  • Supports custom domains.
  • HTTP basic authentication and Bearer token authentication.
  • No need to sign up to get test tunnels - just visit https://pinggy.io to get the command.
  • Cheaper than Ngrok.
  • Supports UDP tunnels through the CLI and the desktop App

Cons of Pinggy


  • No OAuth 2.0 authentication for tunnel visitors.
  • No global server load balancing or edge routing.

Price of Pinggy


Pinggy is one of the cheaper Ngrok alternatives. It has a free tier, and the paid tier starts at $2.5 per month. It offers all features, including custom domains, persistent TCP ports, and live header manipulation, in this plan.


4. Zrok

Zrok is an impressive open source Ngrok alternative that operates on the principles of zero trust networking. Built on top of OpenZiti, a programmable zero trust network overlay, zrok provides users with a secure and efficient way to share resources both publicly and privately.

Users can download zrok from GitHub https://github.com/openziti/zrok/releases/latest. It is one of the best self-hosted alternatives of Ngrok.


Pros of Zrok

  • Open source
  • Self-hosted
  • Private network sharing
  • Built-in file server
  • UDP tunnels

Cons of Zrok

  • Traffic introspection and replay features are not available.
  • Initial setup process is tedious.

Price of Zrok

Zrok is open source; you need to host it on your own server.


5. localhost.run - simple uses only ssh tunnels

localhost.run is possibly the simplest tunneling tool: it is client-less and can instantly make a locally running application available at an internet-accessible URL.

Just run the following command to create a tunnel to port 8080:

ssh -R 80:localhost:8080 localhost.run

Pros of localhost.run

  • Simplicity: Localhost.run offers a straightforward and simple setup process. You only need to execute a single command in your terminal to start the tunneling process, making it easy for developers to get started quickly.
  • No installation required: Unlike Ngrok, which requires installation and configuration, localhost.run doesn’t need any software installation. You can use it directly from the command line, which can be convenient, especially for quick testing or prototyping.
  • Free to use: localhost.run offers a free tier, allowing you to use the service without any cost.

Cons of localhost.run

  • Limited features: Compared to Ngrok and other alternatives such as Pinggy, localhost.run may have a significantly more limited set of features. For example, it may not provide advanced functionalities such as custom domains, request inspection, or TCP tunneling.

7. Tailscale

Tailscale is not exactly an Ngrok alternative; rather, it is a VPN service. Instead of using a central VPN server employed by traditional VPN services, Tailscale uses a mesh network. The strategy employed by Tailscale prevents centralization whenever feasible. This leads to increased throughput and decreased latency, as machine-to-machine network traffic can move directly. Moreover, opting for decentralization enhances stability and dependability by minimizing single points of failure.


Tailscale Funnel is a feature that routes traffic from the public internet to one or more nodes within your Tailscale network. It is akin to openly sharing a node, granting access to anyone, irrespective of whether they have Tailscale.

Using Tailscale Funnel you can achieve the functionality of Ngrok.

10. Pagekite

Pagekite has been around for more than 14 years now. It offers HTTP(S) and TCP tunnels. It has built-in IP whitelisting, password auth, and other advanced features. The free tier includes 2 GB of transfer quota per month, as well as custom domains.

Although the Pagekite program has to be downloaded and installed to access the service, the tool is entirely open-source and written in Python. So, if you are a developer, feel free to hack away!

Top 5 Open Source Ngrok alternatives

If you are looking for only open source ngrok alternatives, here is a list:

  1. frp
  2. Localtunnel
  3. sshuttle
  4. chisel
  5. bore


Step-by-step guide for Example

...