Azure Cloud
Key Points
- Azure offers a free, low-end account
- Get Office 365 for compatibility, or use LibreOffice and draw.io
- Test Hyperledger Fabric on an Azure Linux instance
References
Free Azure Account
https://azure.microsoft.com/en-us/free/free-account-faq/
What is the Azure free account? The Azure free account includes free access to our most popular Azure products for 12 months, $200 credit to spend for the first 30 days of sign up, and access to more than 25 products that are always free.
see details on all the options for the first 12 months
https://drive.google.com/open?id=1TJwy5S4u9HKt9IG5DEtlCxNPH2MXA4CR
What does Azure really cost?
AWS Lightsail is less than 50% of the cost of an equivalent Azure Linux instance
https://aws.amazon.com/lightsail/pricing/
Key Concepts
Azure Fundamentals
Bryan Cafferky on Azure Fundamentals - Data Services and more - YouTube playlists
Watch the videos and play along in Azure
Bryan Cafferky playlists for Azure software lessons
https://www.youtube.com/user/Bryancutube256123/playlists
Create sample SQL Server DB
https://www.youtube.com/watch?v=VnU5-erCIC0
Azure Certifications
https://linuxacademy.com/blog/certifications/azure-certifications-and-roadmap/
There are currently eight Azure-based certifications spread across these three levels. All of these are new certifications, not refreshes of previous Azure certifications:
- Microsoft Certified Azure Fundamentals (Fundamentals)
- Microsoft Certified Azure Administrator (Associate)
- Microsoft Certified Azure Developer (Associate)
- Microsoft Certified Azure AI Engineer Associate (Associate)
- Microsoft Certified Azure Data Engineer Associate (Associate)
- Microsoft Certified Azure Security Technologies (Associate)
- Microsoft Certified Azure Solutions Architect (Expert)
- Microsoft Certified Azure DevOps (Expert)
Free Microsoft online Azure Fundamentals Course
https://docs.microsoft.com/en-us/learn/certifications/azure-fundamentals
https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/
Azure Active Directory B2C identity mgt
https://docs.microsoft.com/en-us/azure/active-directory-b2c/overview
Azure AD B2C is a white-label authentication solution. You can customize the entire user experience with your brand so that it blends seamlessly with your web and mobile applications.
Customize every page displayed by Azure AD B2C when your users sign up, sign in, and modify their profile information. Customize the HTML, CSS, and JavaScript in your user journeys so that the Azure AD B2C experience looks and feels like it's a native part of your application.
Azure AD Integration with External Identities
https://azure.microsoft.com/en-us/services/active-directory/external-identities/b2c/
Apply security controls and application- or policy-based multi-factor authentication to help protect your customers’ personal data.
Using External Identities
https://azure.microsoft.com/en-us/services/active-directory/external-identities/
Build an identity experience that works for any user, using any identity, on any device. Make it easy for customers and partners to sign up and sign in using their existing social media ID, phone number, or credentials from any standards-based identity provider.
Microsoft Azure DB options
Cosmos DB
https://docs.microsoft.com/en-us/azure/cosmos-db/introduction
Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. With a click of a button, Cosmos DB enables you to elastically and independently scale throughput and storage across any number of Azure regions worldwide. You can take advantage of fast, single-digit-millisecond data access using your favorite API, including SQL, MongoDB, Cassandra, Tables, or Gremlin.
Client SDKs are available for .NET, Java, Node.js, and Python.
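As a rough sketch of the Java client path (assuming the com.azure:azure-cosmos SDK and its SQL/Core API; the endpoint, key, and the demo-db / items names below are placeholders, not values from this page), creating a database and container and inserting one item looks roughly like this:

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.CosmosDatabase;

// Sketch only: endpoint, key, and names are placeholders to fill in from your account.
public class CosmosSketch {
    public static class Item {
        public String id;
        public String note;
        public Item() { }
        public Item(String id, String note) { this.id = id; this.note = note; }
    }

    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/")  // placeholder
                .key("<your-primary-key>")                                    // placeholder
                .buildClient();

        client.createDatabaseIfNotExists("demo-db");                  // idempotent
        CosmosDatabase db = client.getDatabase("demo-db");
        db.createContainerIfNotExists("items", "/id");                // partition key path
        CosmosContainer container = db.getContainer("items");

        container.createItem(new Item("1", "hello cosmos"));          // any serializable POJO
        client.close();
    }
}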
SQL DB
https://docs.microsoft.com/en-us/azure/sql-database/
Caveat - most, but not all, Transact-SQL features that applications use are fully supported in both Microsoft SQL Server and Azure SQL Database.
Purchasing model | Description | Best for |
---|---|---|
DTU-based model | This model is based on a bundled measure of compute, storage, and I/O resources. Compute sizes are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools. For more information about DTUs and eDTUs, see What are DTUs and eDTUs?. | Best for customers who want simple, preconfigured resource options. |
vCore-based model | This model allows you to independently choose compute and storage resources. The vCore-based purchasing model also allows you to use Azure Hybrid Benefit for SQL Server to gain cost savings. | Best for customers who value flexibility, control, and transparency. |
Compute costs
Provisioned compute costs
In the provisioned compute tier, the compute cost reflects the total compute capacity that is provisioned for the application.
In the business critical service tier, we automatically allocate at least 3 replicas. To reflect this additional allocation of compute resources, the price in the vCore-based purchasing model is approximately 2.7x higher in the business critical service tier than it is in the general purpose service tier. Likewise, the higher storage price per GB in the business critical service tier reflects the high I/O and low latency of the SSD storage.
The cost of backup storage is the same for the business critical service tier and the general purpose service tier because both tiers use standard storage.
Serverless compute costs
For a description of how compute capacity is defined and costs are calculated for the serverless compute tier, see SQL Database serverless.
Storage costs
Different types of storage are billed differently. For data storage, you're charged for the provisioned storage based upon the maximum database or pool size you select. The cost doesn't change unless you reduce or increase that maximum. Backup storage is associated with automated backups of your instance and is allocated dynamically. Increasing your backup-retention period increases the backup storage that’s consumed by your instance.
By default, 7 days of automated backups of your databases are copied to a read-access geo-redundant storage (RA-GRS) standard Blob storage account. This storage is used by weekly full backups, daily differential backups, and transaction log backups, which are copied every 5 minutes. The size of the transaction logs depends on the rate of change of the database. A minimum storage amount equal to 100 percent of the database size is provided at no extra charge. Additional consumption of backup storage is charged in GB per month.
For more information about storage prices, see the pricing page.
vCore-based purchasing model
A virtual core (vCore) represents a logical CPU and offers you the option to choose between generations of hardware and the physical characteristics of the hardware (for example, the number of cores, the memory, and the storage size). The vCore-based purchasing model gives you flexibility, control, transparency of individual resource consumption, and a straightforward way to translate on-premises workload requirements to the cloud. This model allows you to choose compute, memory, and storage resources based upon your workload needs.
In the vCore-based purchasing model, you can choose between the general purpose and business critical service tiers for single databases, elastic pools, and managed instances. For single databases, you can also choose the hyperscale service tier.
The vCore-based purchasing model lets you independently choose compute and storage resources, match on-premises performance, and optimize price. In the vCore-based purchasing model, you pay for:
- Compute resources (the service tier + the number of vCores and the amount of memory + the generation of hardware).
- The type and amount of data and log storage.
- Backup storage (RA-GRS).
Important
Compute resources, I/O, and data and log storage are charged per database or elastic pool. Backup storage is charged per each database. For more information about managed instance charges, see managed instances. Region limitations: For the current list of supported regions, see products available by region. To create a managed instance in a region that currently isn't supported, send a support request via the Azure portal.
If your single database or elastic pool consumes more than 300 DTUs, converting to the vCore-based purchasing model might reduce your costs. You can convert by using your API of choice or by using the Azure portal, with no downtime. However, conversion isn't required and isn't done automatically. If the DTU-based purchasing model meets your performance and business requirements, you should continue using it.
To convert from the DTU-based purchasing model to the vCore-based purchasing model, select the compute size by using the following rules of thumb:
- Every 100 DTUs in the standard tier require at least 1 vCore in the general purpose service tier.
- Every 125 DTUs in the premium tier require at least 1 vCore in the business critical service tier.
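As a rough illustration only (my own sketch, not Microsoft tooling), the two rules of thumb above can be expressed as a small sizing helper:

// Sketch only: translates an existing DTU tier into a minimum vCore target.
public class DtuToVcoreSizing {
    // Standard tier: at least 1 general purpose vCore per 100 DTUs.
    static int generalPurposeVcores(int standardDtus) {
        return Math.max(1, (int) Math.ceil(standardDtus / 100.0));
    }

    // Premium tier: at least 1 business critical vCore per 125 DTUs.
    static int businessCriticalVcores(int premiumDtus) {
        return Math.max(1, (int) Math.ceil(premiumDtus / 125.0));
    }

    public static void main(String[] args) {
        // Example: 800 DTUs (standard) -> at least 8 general purpose vCores;
        // 1000 DTUs (premium) -> at least 8 business critical vCores.
        System.out.println(generalPurposeVcores(800));     // 8
        System.out.println(businessCriticalVcores(1000));  // 8
    }
}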
DTU-based purchasing model
A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. The DTU-based purchasing model offers a set of preconfigured bundles of compute resources and included storage to drive different levels of application performance. If you prefer the simplicity of a preconfigured bundle and fixed payments each month, the DTU-based model might be more suitable for your needs.
In the DTU-based purchasing model, you can choose between the basic, standard, and premium service tiers for both single databases and elastic pools. The DTU-based purchasing model isn't available for managed instances.
Database transaction units (DTUs)
For a single database at a specific compute size within a service tier, Microsoft guarantees a certain level of resources for that database (independent of any other database in the Azure cloud). This guarantee provides a predictable level of performance. The amount of resources allocated for a database is calculated as a number of DTUs and is a bundled measure of compute, storage, and I/O resources.
The ratio among these resources is originally determined by an online transaction processing (OLTP) benchmark workload designed to be typical of real-world OLTP workloads. When your workload exceeds the amount of any of these resources, your throughput is throttled, resulting in slower performance and time-outs.
The resources used by your workload don't impact the resources available to other SQL databases in the Azure cloud. Likewise, the resources used by other workloads don't impact the resources available to your SQL database.
DTUs are most useful for understanding the relative resources that are allocated for Azure SQL databases at different compute sizes and service tiers. For example:
- Doubling the DTUs by increasing the compute size of a database equates to doubling the set of resources available to that database.
- A premium service tier P11 database with 1750 DTUs provides 350x more DTU compute power than a basic service tier database with 5 DTUs.
To gain deeper insight into the resource (DTU) consumption of your workload, use query-performance insights.
SQL DB service purchase options
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-purchase-models
Single database is one of three deployment options for Azure SQL Database. The other two are elastic pools and managed instance.
Single Database option
The single database deployment option creates a database in Azure SQL Database with its own set of resources, managed via a SQL Database server. Each single database is isolated from others and portable, with its own service tier within the DTU-based or vCore-based purchasing model and a guaranteed compute size.
Elastic Pools DB option
A single database can be moved into or out of an elastic pool for resource sharing. For many businesses and applications, being able to create single databases and dial performance up or down on demand is enough, especially if usage patterns are relatively predictable. But if you have unpredictable usage patterns, it can make it hard to manage costs and your business model. Elastic pools are designed to solve this problem.
Managed Instance DB option
SQL documentation
BYODB - bring your own database (MySQL, etc.)
Azure Cloud Services
https://azure.microsoft.com/en-us
Azure Security Concepts
Azure Security Concepts Intro
Azure DB security options
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-security-overview
- Network security
- Access management
- Authorization
- Threat protection
- Information protection and encryption
- Security management
- Next steps
Learn Azure Messaging and Serverless applications
https://docs.microsoft.com/en-us/learn/paths/architect-messaging-serverless/
Serverless function concepts
Containers like Docker provide significant environment isolation and flexibility.
An app in a Docker container only talks to the Docker engine and the configured ports.
It has no idea of the environment or OS it runs in.
Deploying microservices in containers provides major benefits for most use cases:
- high locality of reference for data and libraries within a microservice, especially when caching is used
- environment agnostic
- easy to scale as a unit independent of other services in other containers
When not to use serverless functions
https://www.serverless.com/blog/when-why-not-use-serverless
https://drive.google.com/file/d/17AMs0HDJIZWFrP-g0jh8WcFlGGIs8GHL/view?usp=sharing
Why serverless functions add value
- it scales with demand automatically
- it significantly reduces server cost (70-90%), because you don’t pay for idle
- it eliminates server maintenance
- it frees up developer resources to take on projects that directly drive business value (versus spending that time on maintenance)
When serverless functions may not be the right choice
- Your workloads are constant
- You fear vendor lock-in
- You need advanced monitoring
- You have long-running functions
- You use an unsupported language
- You have available unused server capacity
Can serverless functions be portable across platforms?
- use a standard language
- use a docker container
Then the serverless function can be redefined on another platform using docker
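One way to read the two points above (a sketch of my own, not an Azure-documented pattern): keep the business logic in plain Java with no cloud SDK types, so only a thin platform-specific adapter changes when the function moves between platforms or into a container.

// Plain Java business logic - no Azure, AWS, or GCP imports, so it stays portable
// across FaaS platforms and containers, and is easy to unit test on its own.
public final class GreetingLogic {
    private GreetingLogic() { }

    public static String greet(String body) {
        if (body == null || body.trim().isEmpty()) {
            return "Please pass a name in the request body";
        }
        return "Hello " + body + "!";
    }
}
// A platform-specific handler (Azure Function, Lambda, etc.) then only parses the
// request, calls GreetingLogic.greet(...), and wraps the result in a response.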
FaaS - a single function deployed as a serverless service
The server is conceptually "invisible" to the developer.
It sounds simple until you deal with the surrounding details - data, authentication, and the other services a function depends on (see below).
Serverless is still a work in progress in 2019.
The most popular serverless platforms--AWS Lambda, Google Cloud Functions, Azure Functions--all present challenges once data gets involved. Want to talk to local AWS services? Dead simple. But once authenticated APIs get involved, it’s more of a pain. Where do you store tokens? How do you handle OAuth redirects? How do you manage users? Quickly that narrow use of serverless can snowball into a pile of other public cloud services … to the point that you’ve swapped the complexity developers know for some new piles of stuff to learn.
Azure Functions
https://docs.microsoft.com/en-us/azure/azure-functions/
- Where should I host my code? - video
Learn Azure Serverless Function
https://docs.microsoft.com/en-us/learn/modules/create-serverless-logic-with-azure-functions/
Java Azure Function Example
https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-java-maven
To develop functions using Java, you must have the following installed:
- Java Developer Kit, version 8
- Apache Maven, version 3.0 or above
- Azure CLI
- Azure Functions Core Tools version 2.6.666 or above
- An Azure subscription.
The JAVA_HOME environment variable must be set to the install location of the JDK to complete this quickstart.
Create Functions project
In an empty folder, run the following command to generate the Functions project from a Maven archetype.
mvn archetype:generate \
-DarchetypeGroupId=com.microsoft.azure \
-DarchetypeArtifactId=azure-functions-archetype
If you're experiencing issues with running the command, take a look at which maven-archetype-plugin version is used.
Maven asks you for values needed to finish generating the project on deployment. Provide the following values when prompted:
Value | Description |
---|---|
groupId | A value that uniquely identifies your project across all projects, following the package naming rules for Java. The examples in this quickstart use com.fabrikam.functions . |
artifactId | A value that is the name of the jar, without a version number. The examples in this quickstart use fabrikam-functions . |
version | Choose the default value of 1.0-SNAPSHOT . |
package | A value that is the Java package for the generated function code. Use the default. The examples in this quickstart use com.fabrikam.functions . |
appName | Globally unique name that identifies your new function app in Azure. Use the default, which is the artifactId appended with a random number. Make a note of this value, you'll need it later. |
appRegion | Choose a region near you or near other services your functions access. The default is westus . Run this Azure CLI command to get a list of all regions:az account list-locations --query '[].{Name:name}' -o tsv |
resourceGroup | Name for the new resource group in which to create your function app. Use |
Maven creates the project files in a new folder with a name of artifactId, which in this example is fabrikam-functions.
Open the new Function.java file from the src/main/java path in a text editor and review the generated code. This code is an HTTP triggered function that echoes the body of the request.
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook?tabs=csharp
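For orientation, here is a sketch of what the archetype-generated Function.java typically looks like (the exact generated file may differ by archetype version; the package name follows the com.fabrikam.functions example above):

package com.fabrikam.functions;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;
import java.util.Optional;

public class Function {
    // HTTP-triggered function: echoes the request body (or a "name" query parameter).
    @FunctionName("HttpTrigger-Java")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.GET, HttpMethod.POST},
                         authLevel = AuthorizationLevel.FUNCTION)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        String name = request.getBody().orElse(request.getQueryParameters().get("name"));
        if (name == null) {
            return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
                    .body("Please pass a name on the query string or in the request body")
                    .build();
        }
        return request.createResponseBuilder(HttpStatus.OK).body("Hello " + name + "!").build();
    }
}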
Run the Function Locally
Run the following command, which changes the directory to the newly created project folder, then builds and runs the function project:
cd fabrikam-functions
mvn clean package
mvn azure-functions:run
You see output like the following from Azure Functions Core Tools when you run the project locally:
...
Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.
Http Functions:
HttpTrigger-Java: [GET,POST] http://localhost:7071/api/HttpTrigger-Java
...
Trigger the function from the command line using cURL in a new terminal window:
curl -w "\n" http://localhost:7071/api/HttpTrigger-Java --data AzureFunctions
Hello AzureFunctions!
The function key isn't required when running locally. Use Ctrl+C in the terminal to stop the function code.
Deploy the Function to Azure
A function app and related resources are created in Azure when you first deploy your function app. Before you can deploy, use the az login Azure CLI command to sign in to your Azure subscription.
az login
Tip
If your account can access multiple subscriptions, use az account set to set the default subscription for this session.
Use the following Maven command to deploy your project to a new function app.
mvn azure-functions:deploy
This azure-functions:deploy Maven target creates the following resources in Azure:
- Resource group. Named with the resourceGroup you supplied.
- Storage account. Required by Functions. The name is generated randomly based on Storage account name requirements.
- App service plan. Serverless hosting for your function app in the specified appRegion. The name is generated randomly.
- Function app. A function app is the deployment and execution unit for your functions. The name is your appName, appended with a randomly generated number.
The deployment also packages the project files and deploys them to the new function app using zip deployment, with run-from-package mode enabled.
After the deployment completes, you see the URL you can use to access your function app endpoints. Because the HTTP trigger we published uses authLevel = AuthorizationLevel.FUNCTION, you need to get the function key to call the function endpoint over HTTP. The easiest way to get the function key is from the Azure portal.
Get the HTTP Trigger URL
You can get the URL required to trigger your function, with the function key, from the Azure portal.
Browse to the Azure portal, sign in, type the appName of your function app into Search at the top of the page, and press enter.
In your function app, expand Functions (Read Only), choose your function, then select </> Get function URL at the top right.
Choose default (Function key) and select Copy.
You can now use the copied URL to access your function.
Verify the Function in Azure
To verify the function app running on Azure using cURL, replace the URL in the sample below with the URL that you copied from the portal.
curl -w "\n" https://fabrikam-functions-20190929094703749.azurewebsites.net/api/HttpTrigger-Java?code=zYRohsTwBlZ68YF.... --data AzureFunctions
This sends a POST request to the function endpoint with AzureFunctions in the body of the request. You see the following response:
Hello AzureFunctions!
Azure VM setups
create
- config files
- load balancer
- virtual nic
- VMs - AzAvailabilitySet
- Get-Credential to set up the admin ID / password
- New-AzVM - select the right image type - Windows, Ubuntu, or other
- create an NSG (network security group) to manage traffic in and out of the subnet (see below)
- more
Azure VM management
- setup RBAC controls
- set VM resource policies to provide resources, manage costs
- hierarchy
- resources < resource groups < subscriptions < management groups
- sysprep.exe to remove personal info from VM config
- monitor VM changes
- update VMs
- Security Center - setup and manage security policies and events
- Install apps - can install multiple apps in a single VM (e.g., SQL Server, .NET, IIS) if needed
- secure the web server with SSL certs in Azure Key Vault
- more
Azure Container setups
Docker on Azure
Docker Jenkins Build Templates
Azure Arc - Orchestration Service for Kubernetes on multiple platforms
https://docs.microsoft.com/en-us/azure/azure-arc/
Azure Arc extends Azure Resource Manager capabilities to Linux and Windows servers, as well as Kubernetes clusters on any infrastructure across on-premises, multi-cloud, and edge. With Azure Arc, customers can also run Azure data services anywhere, realizing the benefits of cloud innovation, including always up-to-date data capabilities, deployment in seconds (rather than hours), and dynamic scalability on any infrastructure. Azure Arc for servers is currently in public preview.
Arc Overview
https://docs.microsoft.com/en-us/azure/azure-arc/servers/overview
Azure Arc for servers (preview) allows you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud provider, similarly to how you manage native Azure virtual machines. When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID, is managed as part of a resource group inside a subscription, and benefits from standard Azure constructs such as Azure Policy and applying tags.
To deliver this experience with your hybrid machines hosted outside of Azure, the Azure Connected Machine agent needs to be installed on each machine that you plan on connecting to Azure. This agent does not deliver any other functionality, and it doesn't replace the Azure Log Analytics agent. The Log Analytics agent for Windows and Linux is required when you want to proactively monitor the OS and workloads running on the machine, manage it using Automation runbooks or solutions like Update Management, or use other Azure services like Azure Security Center.
Guides for Arc
- Connect machines to Arc through Azure Portal
- Connect machines optionally using a Service Principal for auto-scaling
- Connect machines using PowerShell DSC ( Desired State Configuration )
- Manage Agents
Azure Arc Policy Samples
https://docs.microsoft.com/en-us/azure/azure-arc/servers/policy-samples
Audit, Monitoring and Deployment policies for VMs
A Closer Look At Azure Arc – Microsoft’s Hybrid And Multi-Cloud Platform
Arc Agent on each machine or node
The Connected Machine agent sends a regular heartbeat message to the service every 5 minutes. If the service stops receiving these heartbeat messages from a machine, that machine is considered offline and the status will automatically be changed to Disconnected in the portal within 15 to 30 minutes. Upon receiving a subsequent heartbeat message from the Connected Machine agent, its status will automatically be changed to Connected.
https://docs.microsoft.com/en-us/azure/azure-arc/servers/agent-overview
Azure Arc delivers three capabilities - managing VMs running outside of Azure, registering and managing Kubernetes clusters deployed within and outside of Azure and running managed data services based on Azure SQL and PostgreSQL Hyperscale in Kubernetes clusters registered with Azure Arc.
As of Build 2020, Microsoft has opened up the first two features of Azure Arc - management of VMs and Kubernetes clusters running outside of Azure. Azure Arc enabled data services is still in private preview.
Adding machines to a group and defining in policies
The Connected Machine agent can be deployed in a variety of OS environments including Windows Server 2012 R2 or higher, Ubuntu 16.04, SUSE Linux Enterprise Server 15, Red Hat Enterprise Linux 7, and even Amazon Linux 2.
The registered machines are listed in the same Azure resource group that has native Azure VMs running in the public cloud. Customers can apply labels to any VM in the resource group to include or exclude them in a policy. Participating machines can be audited by an Azure Policy and an action can be taken based on the outcome.
Arc can manage Kubernetes Clusters
Similar to how VMs can be onboarded to Azure, Kubernetes clusters can be brought into the fold of Azure Arc.
Customers can attach Kubernetes clusters running anywhere outside of Azure to Azure Arc. This includes bare-metal clusters running on-premises, managed clusters such as Amazon EKS and Google Kubernetes Engine, and enterprise PaaS offerings such as Red Hat OpenShift and Tanzu Kubernetes Grid.
Similar to the Connected Machine agent pushed to a VM, Azure Arc deploys an agent under the azure-arc namespace. It does exactly what the VM agent does - watch for configuration requests. Apart from that, the Arc agent running in a Kubernetes cluster can send telemetry to Azure Monitor. The telemetry includes inventory, Kubernetes events, container stdout and stderr logs, and node, container, Kubelet, and GPU performance metrics.
Once the agent is deployed in a Kubernetes cluster, it can participate in the GitOps-based configuration management and policy updates
Azure Arc-enabled Kubernetes ensures that the workloads match the desired state of the configuration by monitoring the drift and automatically applying the required changes.
Azure Arc-enabled Kubernetes comes with three capabilities:
- Global inventory management - You can onboard all the Kubernetes clusters irrespective of their deployment location to manage them from a single location.
- Centralized workload management - With Azure Arc, it is possible to roll out applications and configuration to hundreds of registered clusters with one commit to the source code repository.
- Policy-driven cluster management - Ensure that the cluster runs the policies by centrally governing and auditing the infrastructure.
Microsoft has partnered with Red Hat, SUSE, and Rancher to officially bring OpenShift, SUSE CaaS and Rancher Kubernetes Engine to Azure Arc.
Microsoft scores additional points for adopting the open source Flux project as the choice of GitOps tool for Azure Arc. It brings transparency to the platform while providing confidence to users.
Azure Arc for Data Services in K8s
With Azure Arc for data services, customers will benefit from the ability to run managed database services in any Kubernetes cluster managed by Azure Arc. This capability will emerge as the key differentiating feature of Azure Arc.
Microsoft DLT service
Managed Fabric Net on Azure
Microsoft Fabric vs. Azure Synapse Analytics: Architecture, Features, Migration Possibilities, FAQs
Microsoft Fabric is a SaaS offering that aims to be a one-stop shop for all of your data engineering, science, analytics, and BI needs. Meanwhile, Azure Synapse Analytics is a PaaS that supports data warehousing, integration, and analytics use cases.
Fabric is seen as a successor to Azure Synapse; however, there are several gaps and differences in terms of architecture and capabilities.
In this article, we’ll explore these differences between Microsoft Fabric and Azure Synapse Analytics, while addressing the most frequently asked questions about the two solutions.
Custom Fabric Net on Azure
Potential Value Opportunities
Potential Challenges
Candidate Solutions
Step-by-step guide for Example
sample code block