Table of Contents
...
...
BYODB - MySQL etc
Azure
...
Cloud Services
...
...
Azure Security Concepts
Azure Security Concepts Intro
Azure DB security options
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-security-overview
- Network security
- Access management
- Authorization
- Threat protection
- Information protection and encryption
- Security management
- Next steps
...
- high locality of reference for data and libraries within a microservice, especially when caching is used
- environment agnostic
- easy to scale as a unit independent of other services in other containers
faas - single function deployed as a serverless service
the server is conceptually "invisible" to the developer
sounds simple until you deal with the supporting services
serverless is a work in progress in 2019
The most popular serverless platforms--AWS Lambda, Google Cloud Functions, Azure Functions--all present challenges once data gets involved. Want to talk to local AWS services? Dead simple. But once authenticated APIs get involved, it’s more of a pain. Where do you store tokens? How do you handle OAuth redirects? How do you manage users? Quickly that narrow use of serverless can snowball into a pile of other public cloud services … to the point that you’ve swapped the complexity developers know for some new piles of stuff to learn.
Azure Functions
https://docs.microsoft.com/en-us/azure/azure-functions/
- Where should I host my code? - video
Learn Azure Serverless Function
https://docs.microsoft.com/en-us/learn/modules/create-serverless-logic-with-azure-functions/
Java Azure Function Example
https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-java-maven
To develop functions using Java, you must have the following installed:
- Java Developer Kit, version 8
- Apache Maven, version 3.0 or above
- Azure CLI
- Azure Functions Core Tools version 2.6.666 or above
- An Azure subscription.
The JAVA_HOME environment variable must be set to the install location of the JDK to complete this quickstart.
Create Functions project
In an empty folder, run the following command to generate the Functions project from a Maven archetype.
mvn archetype:generate \
-DarchetypeGroupId=com.microsoft.azure \
-DarchetypeArtifactId=azure-functions-archetype
If you're experiencing issues running the command, check which maven-archetype-plugin version is used.
Maven asks you for values needed to finish generating the project on deployment. Provide the following values when prompted:
Value | Description |
---|---|
groupId | A value that uniquely identifies your project across all projects, following the package naming rules for Java. The examples in this quickstart use com.fabrikam.functions . |
artifactId | A value that is the name of the jar, without a version number. The examples in this quickstart use fabrikam-functions . |
version | Choose the default value of 1.0-SNAPSHOT . |
package | A value that is the Java package for the generated function code. Use the default. The examples in this quickstart use com.fabrikam.functions . |
appName | Globally unique name that identifies your new function app in Azure. Use the default, which is the artifactId appended with a random number. Make a note of this value; you'll need it later. |
appRegion | Choose a region near you or near other services your functions access. The default is westus . Run this Azure CLI command to get a list of all regions: az account list-locations --query '[].{Name:name}' -o tsv |
resourceGroup | Name for the new resource group in which to create your function app. |
Maven creates the project files in a new folder named after the artifactId, which in this example is fabrikam-functions.
Open the new Function.java file from the src/main/java path in a text editor and review the generated code. This code is an HTTP triggered function that echoes the body of the request.
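The SDK types in the generated code (HttpRequestMessage, HttpResponseMessage, ExecutionContext) need the Functions runtime to exercise, but the core echo logic can be sketched in plain Java, assuming the name arrives either as a query parameter or as the raw body string:

```java
import java.util.Optional;

// Self-contained sketch of the generated HTTP trigger's core logic:
// read a name from the query string or the request body and echo a
// greeting. The real function wraps this in HttpRequestMessage /
// HttpResponseMessage types from the com.microsoft.azure.functions SDK.
public class EchoFunctionSketch {
    static String run(Optional<String> queryName, Optional<String> body) {
        // Prefer the "name" query parameter, fall back to the raw body.
        String name = queryName.orElse(body.orElse(null));
        if (name == null) {
            return "Please pass a name on the query string or in the request body";
        }
        return "Hello " + name + "!";
    }

    public static void main(String[] args) {
        // Mirrors: curl http://localhost:7071/api/HttpTrigger-Java --data AzureFunctions
        System.out.println(run(Optional.empty(), Optional.of("AzureFunctions")));
        // prints: Hello AzureFunctions!
    }
}
```

In the real generated function this logic runs inside a method annotated with @FunctionName and an @HttpTrigger parameter binding.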
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook?tabs=csharp
Run the Function Locally
Run the following command, which changes the directory to the newly created project folder, then builds and runs the function project:
cd fabrikam-functions
mvn clean package
mvn azure-functions:run
You see output like the following from Azure Functions Core Tools when you run the project locally:
...
Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.
Http Functions:
HttpTrigger-Java: [GET,POST] http://localhost:7071/api/HttpTrigger-Java
...
Trigger the function from the command line using cURL in a new terminal window:
curl -w "\n" http://localhost:7071/api/HttpTrigger-Java --data AzureFunctions
Hello AzureFunctions!
The function key isn't required when running locally. Use Ctrl+C in the terminal to stop the function code.
Deploy the Function to Azure
A function app and related resources are created in Azure when you first deploy your function app. Before you can deploy, use the az login Azure CLI command to sign in to your Azure subscription.
az login
Tip
If your account can access multiple subscriptions, use az account set to set the default subscription for this session.
Use the following Maven command to deploy your project to a new function app.
mvn azure-functions:deploy
The azure-functions:deploy Maven target creates the following resources in Azure:
- Resource group. Named with the resourceGroup you supplied.
- Storage account. Required by Functions. The name is generated randomly based on storage account name requirements.
- App service plan. Serverless hosting for your function app in the specified appRegion. The name is generated randomly.
- Function app. A function app is the deployment and execution unit for your functions. The name is your appName, appended with a randomly generated number.
When not to use serverless functions
https://www.serverless.com/blog/when-why-not-use-serverless
https://drive.google.com/file/d/17AMs0HDJIZWFrP-g0jh8WcFlGGIs8GHL/view?usp=sharing
Why serverless functions add value
- it scales with demand automatically
- it significantly reduces server cost (70-90%), because you don’t pay for idle
- it eliminates server maintenance
- it frees up developer resources to take on projects that directly drive business value (versus spending that time on maintenance)
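The "you don't pay for idle" point can be made concrete with back-of-the-envelope arithmetic; the rates below are made-up placeholders, not actual Azure or AWS pricing:

```java
// Rough cost-comparison sketch (hypothetical numbers, not real cloud
// pricing): an always-on VM bills 24/7, while a serverless function
// bills only for actual invocations, so idle-heavy workloads cost less.
public class ServerlessCostSketch {
    // Hypothetical always-on VM: fixed hourly rate, billed all month.
    static double vmMonthlyCost(double hourlyRate) {
        return hourlyRate * 24 * 30;
    }

    // Hypothetical pay-per-use pricing: GB-seconds of compute plus a
    // per-million-invocations request charge.
    static double faasMonthlyCost(long invocations, double avgSeconds,
                                  double gbMemory, double pricePerGbSecond,
                                  double pricePerMillionCalls) {
        double gbSeconds = invocations * avgSeconds * gbMemory;
        return gbSeconds * pricePerGbSecond
             + (invocations / 1_000_000.0) * pricePerMillionCalls;
    }

    public static void main(String[] args) {
        double vm = vmMonthlyCost(0.10);                   // $0.10/hr VM, always on
        double faas = faasMonthlyCost(1_000_000, 0.2, 0.5,
                                      0.000016, 0.20);     // 1M short calls/month
        System.out.printf("vm=%.2f faas=%.2f%n", vm, faas);
        // For this low-traffic workload the function is far cheaper,
        // consistent with the 70-90% savings claim for idle-heavy loads.
    }
}
```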
When serverless functions may not be the right choice
- Your Workloads are Constant. ...
- You Fear Vendor Lock-In. ...
- You Need Advanced Monitoring. ...
- You Have Long-Running Functions. ...
- You Use an Unsupported Language.
- You have available unused server capacity
Can serverless functions be portable across platforms?
- use a standard language
- use a docker container
Then the serverless function can be redeployed on another platform using Docker.
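One way to apply the "standard language" advice, sketched here with illustrative names: keep the business logic behind a plain JDK interface and confine provider-specific wiring (Azure Functions annotations, Lambda handlers, a container entrypoint) to thin adapters:

```java
import java.util.function.Function;

// Portability sketch: the business logic depends on nothing but the JDK,
// so the same class can be wrapped by an Azure Functions trigger, an AWS
// Lambda handler, or a plain HTTP server inside a Docker container.
public class PortableFunction {
    // Provider-neutral core logic (illustrative).
    static final Function<String, String> GREET =
        name -> (name == null || name.isEmpty())
            ? "Please pass a name"
            : "Hello " + name + "!";

    public static void main(String[] args) {
        // Any platform adapter just converts its own request type to a
        // String, calls GREET, and converts the result back.
        System.out.println(GREET.apply("AzureFunctions"));
        // prints: Hello AzureFunctions!
    }
}
```

The adapters are small and disposable; only the core logic needs to move between platforms.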
...
- setup RBAC controls
- set VM resource policies to provide resources, manage costs
- hierarchy
- resources < resource groups < subscriptions < management groups
- sysprep.exe to remove personal info from VM config
- monitor VM changes
- update VMs
- Security Center - setup and manage security policies and events
- Install apps - can install multiple apps in a single VM (e.g. SQL Server, .NET, IIS) if needed
- secure the web server with SSL certs stored in Azure Key Vault
- more
Azure Container setups
Docker on Azure
Docker Jenkins Build Templates
Azure Arc - Orchestration Service for Kubernetes on multiple platforms
https://docs.microsoft.com/en-us/azure/azure-arc/
Azure Arc extends Azure Resource Manager capabilities to Linux and Windows servers, as well as Kubernetes clusters on any infrastructure across on-premises, multi-cloud, and edge. With Azure Arc, customers can also run Azure data services anywhere, realizing the benefits of cloud innovation, including always up-to-date data capabilities, deployment in seconds (rather than hours), and dynamic scalability on any infrastructure. Azure Arc for servers is currently in public preview.
Arc Overview
https://docs.microsoft.com/en-us/azure/azure-arc/servers/overview
Azure Arc for servers (preview) allows you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud provider, similarly to how you manage native Azure virtual machines. When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID, is managed as part of a resource group inside a subscription, and benefits from standard Azure constructs such as Azure Policy and applying tags.
To deliver this experience with your hybrid machines hosted outside of Azure, the Azure Connected Machine agent needs to be installed on each machine that you plan on connecting to Azure. This agent does not deliver any other functionality, and it doesn't replace the Azure Log Analytics agent. The Log Analytics agent for Windows and Linux is required when you want to proactively monitor the OS and workloads running on the machine, manage it using Automation runbooks or solutions like Update Management, or use other Azure services like Azure Security Center.
Guides for Arc
- Connect machines to Arc through Azure Portal
- Connect machines at scale using a Service Principal
- Connect machines using PowerShell DSC ( Desired State Configuration )
- Manage Agents
Azure Arc Policy Samples
https://docs.microsoft.com/en-us/azure/azure-arc/servers/policy-samples
Audit, Monitoring and Deployment policies for VMs
A Closer Look At Azure Arc – Microsoft’s Hybrid And Multi-Cloud Platform
Arc Agent on each machine or node
The Connected Machine agent sends a regular heartbeat message to the service every 5 minutes. If the service stops receiving these heartbeat messages from a machine, that machine is considered offline and the status will automatically be changed to Disconnected in the portal within 15 to 30 minutes. Upon receiving a subsequent heartbeat message from the Connected Machine agent, its status will automatically be changed to Connected.
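The status rule described above amounts to a timestamp comparison; the 30-minute grace period below is taken from the quoted 15 to 30 minute window, while the class and method names are illustrative:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the Connected Machine status rule: agents heartbeat every
// 5 minutes; if no heartbeat arrives within the grace window, the
// machine is shown as Disconnected until the next heartbeat.
public class HeartbeatStatusSketch {
    // Upper bound of the documented 15-30 minute detection window.
    static final Duration GRACE = Duration.ofMinutes(30);

    static String status(Instant lastHeartbeat, Instant now) {
        return Duration.between(lastHeartbeat, now).compareTo(GRACE) > 0
            ? "Disconnected"
            : "Connected";
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2020-06-01T12:00:00Z");
        // A heartbeat 5 minutes ago is within the window.
        System.out.println(status(now.minus(Duration.ofMinutes(5)), now));
        // A heartbeat 45 minutes ago is past the window.
        System.out.println(status(now.minus(Duration.ofMinutes(45)), now));
    }
}
```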
https://docs.microsoft.com/en-us/azure/azure-arc/servers/agent-overview
...
...
Azure Arc delivers three capabilities - managing VMs running outside of Azure, registering and managing Kubernetes clusters deployed within and outside of Azure and running managed data services based on Azure SQL and PostgreSQL Hyperscale in Kubernetes clusters registered with Azure Arc.
As of Build 2020, Microsoft has opened up the first two features of Azure Arc - management of VMs and Kubernetes clusters running outside of Azure. Azure Arc enabled data services is still in private preview.
Adding machines to a group and defining in policies
The Connected Machine agent can be deployed in a variety of OS environments including Windows Server 2012 R2 or higher, Ubuntu 16.04, SUSE Linux Enterprise Server 15, Red Hat Enterprise Linux 7, and even Amazon Linux 2.
The registered machines are listed in the same Azure resource group that has native Azure VMs running in the public cloud. Customers can apply labels to any VM in the resource group to include or exclude them in a policy. Participating machines can be audited by an Azure Policy and an action can be taken based on the outcome.
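The label-based include/exclude behavior can be sketched as a simple tag filter over the machines in a resource group; all names here are illustrative, not an Azure Policy API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of tag-based policy scoping: a policy applies only to the
// machines in a resource group whose tags match its selector, whether
// they are native Azure VMs or Arc connected machines.
public class PolicyScopeSketch {
    static class Machine {
        final String name;
        final Map<String, String> tags;
        Machine(String name, Map<String, String> tags) {
            this.name = name;
            this.tags = tags;
        }
    }

    static List<String> inScope(List<Machine> machines, String tagKey, String tagValue) {
        List<String> names = new ArrayList<>();
        for (Machine m : machines) {
            if (tagValue.equals(m.tags.get(tagKey))) names.add(m.name);
        }
        return names;
    }

    public static void main(String[] args) {
        List<Machine> resourceGroup = List.of(
            new Machine("azure-vm-1", Map.of("env", "prod")),  // native Azure VM
            new Machine("onprem-01", Map.of("env", "prod")),   // Arc connected machine
            new Machine("onprem-02", Map.of("env", "dev")));
        // Only the "prod" machines fall under a prod-scoped policy.
        System.out.println(inScope(resourceGroup, "env", "prod"));
    }
}
```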
Arc can manage Kubernetes Clusters
Similar to how VMs can be onboarded to Azure, Kubernetes clusters can be brought into the fold of Azure Arc.
Customers can attach Kubernetes clusters running anywhere outside of Azure to Azure Arc. This includes bare-metal clusters running on-premises, managed clusters such as Amazon EKS and Google Kubernetes Engine, and enterprise PaaS offerings such as Red Hat OpenShift and Tanzu Kubernetes Grid.
Similar to the Connected Machine agent pushed to a VM, Azure Arc deploys an agent under the azure-arc namespace. It does exactly what the VM agent does - watch for configuration requests. Apart from that, the Arc agent running in a Kubernetes cluster can send telemetry to Azure Monitor. The telemetry includes inventory, Kubernetes events, container stdout and stderr logs, and node, container, Kubelet, and GPU performance metrics.
Once the agent is deployed in a Kubernetes cluster, it can participate in GitOps-based configuration management and policy updates.
Azure Arc-enabled Kubernetes ensures that the workloads match the desired state of the configuration by monitoring the drift and automatically applying the required changes.
Azure Arc-enabled Kubernetes comes with three capabilities:
- Global inventory management - You can onboard all the Kubernetes clusters irrespective of their deployment location to manage them from a single location.
- Centralized workload management - With Azure Arc, it is possible to roll out applications and configuration to hundreds of registered clusters with one commit to the source code repository.
- Policy-driven cluster management - Ensure that the cluster runs the policies by centrally governing and auditing the infrastructure.
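The drift correction described above can be sketched as a reconcile loop that diffs the desired state (from the Git repository) against the actual cluster state and applies the difference; the map-of-name-to-spec model is a heavy simplification of what Flux actually does:

```java
import java.util.HashMap;
import java.util.Map;

// GitOps reconcile sketch: desired state comes from a Git repo, actual
// state from the cluster; reconciliation applies whatever differs so
// the cluster converges back to the declared configuration.
public class ReconcileSketch {
    static Map<String, String> reconcile(Map<String, String> desired,
                                         Map<String, String> actual) {
        Map<String, String> result = new HashMap<>(actual);
        // Create or update anything that drifted from the desired state.
        desired.forEach(result::put);
        // Remove resources no longer declared in Git.
        result.keySet().removeIf(name -> !desired.containsKey(name));
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> desired = Map.of("web", "v2", "db", "v1");
        Map<String, String> actual =
            new HashMap<>(Map.of("web", "v1", "old-job", "v1"));
        // "web" is upgraded, "db" is created, "old-job" is pruned.
        System.out.println(reconcile(desired, actual));
    }
}
```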
Microsoft has partnered with Red Hat, SUSE, and Rancher to officially bring OpenShift, SUSE CaaS and Rancher Kubernetes Engine to Azure Arc.
Microsoft scores additional points for adopting the open source Flux project as the choice of GitOps tool for Azure Arc. It brings transparency to the platform while providing confidence to users.
Azure Arc for Data Services in K8s
With Azure Arc for data services, customers will benefit from the ability to run managed database services in any Kubernetes cluster managed by Azure Arc. This capability will emerge as the key differentiating feature of Azure Arc.
Microsoft DLT service
...
Managed Fabric Net on Azure
Microsoft Fabric vs. Azure Synapse Analytics: Architecture, Features, Migration Possibilities, FAQs
Microsoft Fabric is a SaaS offering that aims to be a one-stop shop for all of your data engineering, science, analytics, and BI needs. Meanwhile, Azure Synapse Analytics is a PaaS that supports data warehousing, integration, and analytics use cases.
Fabric is seen as a successor to Azure Synapse; however, there are several gaps and differences in terms of architecture and capabilities.
In this article, we’ll explore these differences between Microsoft Fabric and Azure Synapse Analytics, while addressing the most frequently asked questions about the two solutions.
Custom Fabric Net on Azure
...