Service Fabric in Azure

Service Fabric is a distributed systems platform used to build hyper-scale, agile, and fault-tolerant microservices in the cloud. It provides a set of services for orchestrating the applications deployed on the cluster, and it abstracts away the complexities of provisioning, deploying, fault handling, scaling, and optimizing those applications.

Should something fail, Service Fabric is responsible for fault handling and recovery of services. It plays the same role as other microservice orchestration platforms such as Docker Swarm, Kubernetes, Mesosphere, and CoreOS.

Service Fabric supports quite a few programming models to make it easy to develop a variety of microservices. Each service can be either stateful or stateless. Stateless services are typically used for Web APIs or any other service that doesn't need to maintain state on the nodes. These services treat each request as independent and assume that all the information required to process the request is contained within it. State is maintained in external data stores such as SQL Server, Cosmos DB, or Azure Redis Cache.
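To make the stateless style concrete, here is a minimal sketch: a plain ASP.NET Core minimal API whose only state is a counter kept in an external Azure Redis Cache, so any instance on any node can serve any request. The cache address is a placeholder, and the snippet assumes the StackExchange.Redis NuGet package.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using StackExchange.Redis;

var app = WebApplication.CreateBuilder(args).Build();

// Placeholder connection string; in practice it would come from configuration or Key Vault.
var redis = await ConnectionMultiplexer.ConnectAsync(
    "my-cache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
var db = redis.GetDatabase();

// Each request is self-contained: the only state is the counter in the external cache,
// so instances of this service can be added, removed, or moved between nodes freely.
app.MapGet("/visits", async () =>
{
    long visits = await db.StringIncrementAsync("visits");
    return Results.Ok(new { visits });
});

app.Run();
```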

Stateful services, on the other hand, maintain their state in the same cluster. Service Fabric provides reliable data structures that replicate state across the nodes hosting the service's replicas, along with APIs for storing, retrieving, and updating them. Any time an update to a data structure is committed, it is automatically replicated to the service's replicas on other nodes in the cluster, where it remains available should the service fail over. Since the data and the services are co-located, this can significantly reduce the latency of processing the data.
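To make the stateful model concrete, here is a minimal sketch of a Reliable Services stateful service that keeps a counter in a reliable dictionary. The service and dictionary names are hypothetical, the snippet assumes the Microsoft.ServiceFabric.Services packages from the Service Fabric SDK, and the usual Program.cs registration via ServiceRuntime.RegisterServiceAsync is omitted.

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

// Hypothetical stateful service that stores a counter in a reliable dictionary.
internal sealed class VisitCounterService : StatefulService
{
    public VisitCounterService(StatefulServiceContext context)
        : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // The dictionary lives inside the cluster and is replicated to the
        // service's secondary replicas when a transaction commits.
        var counters = await this.StateManager
            .GetOrAddAsync<IReliableDictionary<string, long>>("counters");

        while (!cancellationToken.IsCancellationRequested)
        {
            using (var tx = this.StateManager.CreateTransaction())
            {
                await counters.AddOrUpdateAsync(tx, "visits", 1, (key, current) => current + 1);
                await tx.CommitAsync(); // state is durable and replicated after this point
            }

            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
        }
    }
}
```

Because the committed state is replicated and co-located with the service's replicas, reads and updates do not have to cross over to an external data store.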

Building Microservices on Azure

Microservices have become a popular architectural style for building cloud applications that are resilient, highly scalable, and able to evolve quickly. In this post, we explore how to build and run a microservices architecture on Azure, using Kubernetes as a container orchestrator. Future articles will include Service Fabric.
The following common open source technologies are used:

1- Azure Container Service (Kubernetes) to run front-end and back-end services.
2- Azure Functions to run event driven services.
3- Linkerd to manage inter-service communication.
4- Prometheus to monitor system/application metrics.
5- Fluentd and Elasticsearch to monitor application logs.
6- Cosmos DB, Azure Data Lake Store, and Azure Redis Cache to store different types of data.

Azure Service Fabric is a Platform as a Service (PaaS) offering from Microsoft. Azure SQL Database, Azure DocumentDB, Azure IoT, Cortana, Power BI, Microsoft Intune, Event Hubs, and Skype for Business are some of the Microsoft products that leverage Service Fabric. Service Fabric provides the infrastructure to run massive-scale, reliable, stateless or stateful services, along with end-to-end application lifecycle management, container and process orchestration, and health monitoring.

Diagnostics Logging for Azure Logic App

For richer debugging with runtime details and events, you can set up diagnostics logging with Azure Log Analytics. Log Analytics is a service in Azure that monitors your cloud and on-premises environments to help you maintain their availability and performance.

1- In the Azure portal, find and select your logic app.

2- On the logic app blade menu, under Monitoring, choose Diagnostics > Diagnostic Settings.

3- Under Diagnostics settings, choose On.

4- Now select the Log Analytics workspace and event category for logging as follows:

  • Select Send to Log Analytics.
  • Under Log Analytics, choose Configure.
  • Under OMS Workspaces, select the Log Analytics workspace to use for logging.
  • Under Log, select the WorkflowRuntime category.
  • Choose the metric interval.
  • When you’re done, choose Save.


Monitor Azure Logic Apps

After you create and run a logic app, you can check its runs history, trigger history, status, and performance. For real-time event monitoring and richer debugging, set up diagnostics logging for your logic app. That way, you can find and view events, like trigger events, run events, and action events. To get notifications about failures or other possible problems, set up alerts. For example, you can create an alert that detects “when more than five runs fail in an hour.” You can also set up monitoring, tracking, and logging programmatically by using Azure Diagnostics event settings and properties.

1- To find your logic app in the Azure portal, on the main Azure menu, choose All services. In the search box, type “logic apps”, and choose Logic apps.

2- Select your logic app, then choose Overview.

  • Runs history shows all the runs for your logic app.
  • Trigger history shows all the trigger activity for your logic app.

3- To view the steps from a specific run, under Runs history, select that run.

4- To get more details about the run, choose Run Details. This information summarizes the steps, status, inputs, and outputs for the run.

5- To get details about a specific step, choose that step. You can now review details like inputs, outputs, and any errors that happened for that step.

6- To get details about a specific trigger event, go back to the Overview pane. Under Trigger history, select the trigger event. You can now review details like inputs and outputs.

Azure Logic App Common Scenarios

Azure Logic Apps helps you orchestrate and integrate different services by providing 100+ ready-to-use connectors, ranging from on-premises SQL Server or SAP to Microsoft Cognitive Services. The Logic Apps service is “serverless”, so you don’t have to worry about scale or instances. All you have to do is define the workflow with a trigger and the actions that the workflow performs. The underlying platform handles scale, availability, and performance. Logic Apps is especially useful for use cases and scenarios where you need to coordinate multiple actions across multiple systems.

Azure Logic App Triggers

Every logic app starts with one, and only one, trigger, which starts your logic app workflow and passes in any data that came with the triggering event. Many connectors provide triggers, which come in two types:

Polling triggers: These regularly check a service endpoint for new data. When new data exists, the trigger creates and runs a new workflow instance with that data as input.

Push triggers: These listen for data at a service endpoint, waiting until a specific event happens. When the event happens, the trigger fires immediately, creating and running a new workflow instance that uses any available data as input.


Practical scenarios for polling:

Schedule – Recurrence trigger lets you set the start date and time plus the recurrence for firing your logic app. For example, you can select the days of the week and times of day for triggering your logic app.

The “When an email is received” trigger lets your logic app check for new email from any mail provider that’s supported by Logic Apps, for example, Office 365 Outlook, Gmail, Outlook.com, and so on.

The HTTP trigger lets your logic app check a specified service endpoint by communicating over HTTP.

Practical scenarios for pushing:

The Request/Response – Request trigger lets your logic app receive HTTP requests and respond to events in real time.

The HTTP Webhook trigger subscribes to a service endpoint by registering a callback URL with that service. That way, the service can just notify the trigger when the specified event happens, so that the trigger doesn’t need to poll the service.

After receiving a notification about new data or an event, the trigger fires, creates a new logic app workflow instance, and runs the actions in the workflow. You can access any data from the trigger throughout the workflow. 
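As a small illustration of the push style, the sketch below posts a JSON payload to the callback URL exposed by a Request trigger; the URL and payload shown are placeholders for the ones generated for your own workflow in the Logic Apps designer.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class RequestTriggerClient
{
    static async Task Main()
    {
        // Placeholder callback URL; copy the real one (including its SAS signature)
        // from the Request trigger in the Logic Apps designer.
        var callbackUrl =
            "https://prod-00.westus.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?<sas>";

        using (var http = new HttpClient())
        {
            var payload = new StringContent("{ \"orderId\": 42 }", Encoding.UTF8, "application/json");

            // Each POST fires the trigger and starts a new workflow run with this body as input.
            var response = await http.PostAsync(callbackUrl, payload);
            Console.WriteLine($"Logic App responded with status {(int)response.StatusCode}");
        }
    }
}
```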

Azure Threat Detection

Azure Active Directory Identity Protection

Azure Active Directory Identity Protection is a feature of the Azure AD Premium P2 edition that gives you an overview of the risk events and potential vulnerabilities affecting your organization's identities. Microsoft has been securing cloud-based identities for over a decade, and with Azure AD Identity Protection, Microsoft is making these same protection systems available to enterprise customers. Identity Protection uses Azure AD's existing anomaly detection capabilities, available through Azure AD's Anomalous Activity Reports, and introduces new risk event types that can detect anomalies in real time.

Identity Protection uses adaptive machine learning algorithms and heuristics to detect anomalies and risk events that may indicate that an identity has been compromised. Using this data, Identity Protection generates reports and alerts that enable you to investigate these risk events and take appropriate remediation or mitigation action.


Azure Database Security Best Practices

Security is a top concern when managing databases, and it has always been a priority for Azure SQL Database. Your databases can be tightly secured to help satisfy most regulatory or security requirements, including HIPAA, ISO 27001/27002, and PCI DSS Level 1, among others. A current list of security compliance certifications is available at the Microsoft Trust Center site. You also can choose to place your databases in specific Azure datacenters based on regulatory requirements.

In this article, we will discuss a collection of Azure database security best practices. These best practices are derived from our experience with Azure database security and the experiences of customers like yourself.


Azure database security best practices are:

  • Use firewall rules to restrict database access
  • Enable database authentication
  • Protect your data using encryption
  • Protect data in transit (see the connection sketch after this list)
  • Enable database auditing
  • Enable database threat detection
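As a small illustration of the "protect data in transit" item, here is a minimal sketch of a client connection that forces TLS when talking to an Azure SQL database; the server, database, and credentials are placeholders, and a real application would prefer Azure AD authentication and a secret store over an inline password.

```csharp
using System;
using System.Data.SqlClient;

class SecureConnectionExample
{
    static void Main()
    {
        // Placeholder server, database, and credentials. Encrypt=True forces TLS on the wire,
        // and TrustServerCertificate=False makes the client validate the server certificate.
        var connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydb;User ID=appuser;Password=<password>;" +
            "Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT @@VERSION;", connection))
            {
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}
```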

Circuit Breaker Design Pattern

Handle faults that might take a variable amount of time to recover from, when connecting to a remote service or resource. This can improve the stability and resiliency of an application.

Challenge

In a distributed environment, calls to remote resources and services can fail due to transient faults, such as slow network connections, timeouts, or the resources being overcommitted or temporarily unavailable. These faults typically correct themselves after a short period of time, and a robust cloud application should be prepared to handle them by using a strategy such as the Retry Design Pattern.

However, there can also be situations where faults are due to unanticipated events, and that might take much longer to fix. These faults can range in severity from a partial loss of connectivity to the complete failure of a service. In these situations it might be pointless for an application to continually retry an operation that is unlikely to succeed, and instead the application should quickly accept that the operation has failed and handle this failure accordingly. Additionally, if a service is very busy, failure in one part of the system might lead to cascading failures. 

Solution

The Circuit Breaker pattern can prevent an application from repeatedly trying to execute an operation that's likely to fail, allowing it to continue without waiting for the fault to be fixed or wasting CPU cycles while it determines that the fault is long lasting. The Circuit Breaker pattern also enables an application to detect whether the fault has been resolved. If the problem appears to have been fixed, the application can try to invoke the operation again.

The purpose of the Circuit Breaker pattern is different from that of the Retry pattern. The Retry pattern enables an application to retry an operation in the expectation that it will eventually succeed. The Circuit Breaker pattern prevents an application from performing an operation that is likely to fail. An application can combine these two patterns by using the Retry pattern to invoke an operation through a circuit breaker. However, the retry logic should be sensitive to any exceptions returned by the circuit breaker and abandon retry attempts if the circuit breaker indicates that a fault is not transient.
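Here is a minimal sketch of that combination in C#, using the Polly library (assumed as a NuGet dependency) against a hypothetical endpoint. The retry policy wraps the circuit breaker but only handles transient HTTP failures, so a BrokenCircuitException thrown by an open circuit is not retried.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.CircuitBreaker;

class CircuitBreakerExample
{
    static async Task Main()
    {
        var http = new HttpClient();

        // Open the circuit after 3 consecutive handled failures, for 30 seconds.
        var breaker = Policy
            .Handle<HttpRequestException>()
            .CircuitBreakerAsync(3, TimeSpan.FromSeconds(30));

        // Retry transient failures with exponential backoff. BrokenCircuitException
        // is deliberately not handled here, so an open circuit stops the retries.
        var retry = Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        var resilient = Policy.WrapAsync(retry, breaker);

        try
        {
            // Hypothetical remote endpoint.
            var response = await resilient.ExecuteAsync(
                () => http.GetAsync("https://example.com/api/orders"));
            Console.WriteLine((int)response.StatusCode);
        }
        catch (BrokenCircuitException)
        {
            // Fail fast: the downstream service is considered unavailable right now.
            Console.WriteLine("Circuit is open; falling back.");
        }
    }
}
```

After the break duration expires, the breaker lets a single trial call through; if it succeeds the circuit closes again, otherwise it stays open for another interval.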


Creating Azure Logic App with Visual Studio

I am going to use Visual Studio to create an Azure Logic App.

Launch Visual Studio and select File -> New Project -> Cloud -> Resource Group

Give it a name and then you’ll need to choose a template. Scroll down until you see Logic App.

Once everything spins up, you’ll notice you have the following file structure in Visual Studio.

  • Deploy-AzureResourceGroup.ps1 – A PowerShell deployment script for the Logic App
  • LogicApp.json – This is where the main logic for your Logic App lives
  • LogicApp.parameters.json – The parameters file, which you'll mostly want to leave alone

If you click on LogicApp.json, you'll see the code and a JSON Outline in Visual Studio, and you could begin hand-coding your app. Next, go to Tools > Extensions and Updates, search for Logic Apps, and press Download.

A VSIX installer will appear after you close Visual Studio; just follow the steps to install it. Now you can right-click LogicApp.json and open it with the Logic App Designer.


Azure Logic App with Visual Studio

Open your Visual Studio 2017 Logic App project. Right-click the name of your project, select Deploy, and then choose either a new or an existing resource group. It will prompt you to log in, so do so now.

If there are any fields that you missed, it will prompt you to enter them now. In my case, I had not set the name, and it prompted me to do so. You'll then see in the output window that Visual Studio calls the PowerShell script to deploy the resources for your Logic App. Once it finishes deploying, log into the Azure portal to see your new resource.
