Health Endpoint Monitoring Design Pattern in Azure

Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. This can help to verify that applications and services are performing correctly.

Challenge

It’s a good practice, and often a business requirement, to monitor web applications and back-end services, to ensure they’re available and performing correctly. However, it’s more difficult to monitor services running in the cloud than it is to monitor on-premises services. For example, you don’t have full control of the hosting environment, and the services typically depend on other services provided by platform vendors and others.

There are many factors that affect cloud-hosted applications such as network latency, the performance and availability of the underlying compute and storage systems, and the network bandwidth between them. The service can fail entirely or partially due to any of these factors. Therefore, you must verify at regular intervals that the service is performing correctly to ensure the required level of availability, which might be part of your service level agreement (SLA).

Solution

Implement health monitoring by sending requests to an endpoint on the application. The application should perform the necessary checks, and return an indication of its status.
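
As an illustration, here is a minimal sketch of such an endpoint written as an ASP.NET Web API controller. The route, the status checks, and the helper names are hypothetical; a real service would substitute whatever checks matter for its own dependencies.

    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class HealthCheckController : ApiController
    {
        // Hypothetical endpoint polled by an external monitoring tool.
        [HttpGet]
        [Route("api/health")]
        public HttpResponseMessage Get()
        {
            // Run lightweight checks against the service's dependencies,
            // e.g. a trivial database query or a storage connectivity probe.
            bool healthy = DatabaseIsReachable() && StorageIsReachable();

            // 200 OK signals "healthy"; anything else signals a problem.
            return Request.CreateResponse(
                healthy ? HttpStatusCode.OK : HttpStatusCode.ServiceUnavailable);
        }

        private bool DatabaseIsReachable() { return true; } // placeholder check
        private bool StorageIsReachable() { return true; }  // placeholder check
    }

An external tool can then request this endpoint at a regular interval and treat any response other than 200 OK, or a timeout, as a sign that the application is not performing correctly.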

Scheduler Agent Supervisor Design Pattern

Coordinate a set of distributed actions as a single operation. If any of the actions fail, try to handle the failures transparently, or else undo the work that was performed, so the entire operation succeeds or fails as a whole. 

Challenge

An application performs tasks that include a number of steps, some of which might invoke remote services or access remote resources. The individual steps might be independent of each other, but they are orchestrated by the application logic that implements the task.

Whenever possible, the application should ensure that the task runs to completion and resolve any failures that might occur when accessing remote services or resources. Failures can occur for many reasons. If the application detects a more permanent fault it can’t easily recover from, it must be able to restore the system to a consistent state and ensure integrity of the entire operation.

Solution

The Scheduler Agent Supervisor pattern defines the following actors. These actors orchestrate the steps to be performed as part of the overall task.

  • The Scheduler arranges for the steps that make up the task to be executed and orchestrates their operation. These steps can be combined into a pipeline or workflow. The Scheduler is responsible for ensuring that the steps in this workflow are performed in the right order. As each step is performed, the Scheduler records the state of the workflow, such as “step not yet started,” “step running,” or “step completed.” The state information should also include an upper limit of the time allowed for the step to finish, called the complete-by time. If a step requires access to a remote service or resource, the Scheduler invokes the appropriate Agent, passing it the details of the work to be performed. The Scheduler typically communicates with an Agent using asynchronous request/response messaging. This can be implemented using queues, although other distributed messaging technologies could be used instead.

  • The Agent contains logic that encapsulates a call to a remote service, or access to a remote resource referenced by a step in a task. Each Agent typically wraps calls to a single service or resource, implementing the appropriate error handling and retry logic (subject to a timeout constraint, described later). If the steps in the workflow being run by the Scheduler use several services and resources across different steps, each step might reference a different Agent.

  • The Supervisor monitors the status of the steps in the task being performed by the Scheduler. It runs periodically (the frequency will be system specific), and examines the status of steps maintained by the Scheduler. If it detects any that have timed out or failed, it arranges for the appropriate Agent to recover the step or execute the appropriate remedial action. Note that the recovery or remedial actions are implemented by the Scheduler and Agents. The Supervisor should simply request that these actions be performed.

The Scheduler, Agent, and Supervisor are logical components and their physical implementation depends on the technology being used. For example, several logical agents might be implemented as part of a single web service.

The Scheduler maintains information about the progress of the task and the state of each step in a durable data store, called the state store. The Supervisor can use this information to help determine whether a step has failed. 
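
To make the division of responsibilities concrete, here is a minimal C# sketch (all type and member names are hypothetical) of the kind of step record a Scheduler might persist in the state store, and the periodic sweep a Supervisor might run over it:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // State values the Scheduler records for each step of the workflow.
    enum StepState { NotStarted, Running, Completed, Failed }

    class StepRecord
    {
        public string StepId;
        public StepState State;
        public DateTime CompleteBy; // upper limit of time allowed for the step
    }

    class Supervisor
    {
        // Runs periodically, examining the step state maintained by the Scheduler.
        // It does not recover steps itself; it only requests that the Scheduler
        // and Agents perform the recovery or remedial action.
        public void Sweep(IEnumerable<StepRecord> stateStore,
                          Action<StepRecord> requestRecovery)
        {
            foreach (var step in stateStore.Where(s =>
                         s.State == StepState.Running && DateTime.UtcNow > s.CompleteBy))
            {
                requestRecovery(step);
            }
        }
    }
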

Azure Ambassador Design Pattern

The Ambassador pattern creates helper services that send network requests on behalf of a consumer service or application. An ambassador service can be thought of as an out-of-process proxy that is co-located with the client.

This pattern can be useful for offloading common client connectivity tasks such as monitoring, logging, routing, security (such as TLS), and resiliency patterns in a language agnostic way. It is often used with legacy applications, or other applications that are difficult to modify, in order to extend their networking capabilities. It can also enable a specialized team to implement those features.

Challenge

Resilient cloud-based applications require features such as circuit breaking, routing, metering and monitoring, and the ability to make network-related configuration updates. It may be difficult or impossible to update legacy applications or existing code libraries to add these features, because the code is no longer maintained or can’t be easily modified by the development team. Network calls may also require substantial configuration for connection, authentication, and authorization. If these calls are used across multiple applications, built using multiple languages and frameworks, the calls must be configured for each of these instances. In addition, network and security functionality may need to be managed by a central team within your organization. With a large code base, it can be risky for that team to update application code they aren’t familiar with.

Solution

Put client frameworks and libraries into an external process that acts as a proxy between your application and external services. Deploy the proxy on the same host environment as your application to allow control over routing, resiliency, security features, and to avoid any host-related access restrictions. You can also use the ambassador pattern to standardize and extend instrumentation. The proxy can monitor performance metrics such as latency or resource usage, and this monitoring happens in the same host environment as the application.
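
As a rough illustration, the following C# sketch shows an out-of-process ambassador that listens on localhost, forwards GET requests to a remote service, and adds simple retry logic on behalf of the client. The port, the upstream address, and the retry policy are all assumptions for the example:

    using System;
    using System.Net;
    using System.Net.Http;

    class AmbassadorProxy
    {
        // Hypothetical upstream service; in practice this comes from configuration.
        const string Upstream = "https://backend.example.com";

        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:9000/"); // clients call this
            listener.Start();
            var client = new HttpClient();

            while (true)
            {
                var ctx = listener.GetContext();
                // Retry transient failures so the legacy client doesn't have to.
                for (int attempt = 1; attempt <= 3; attempt++)
                {
                    try
                    {
                        var response = client.GetAsync(Upstream + ctx.Request.RawUrl).Result;
                        ctx.Response.StatusCode = (int)response.StatusCode;
                        var body = response.Content.ReadAsByteArrayAsync().Result;
                        ctx.Response.OutputStream.Write(body, 0, body.Length);
                        break;
                    }
                    catch (AggregateException) when (attempt < 3)
                    {
                        // A real ambassador would log the failure and back off here.
                        Console.WriteLine($"Attempt {attempt} failed, retrying...");
                    }
                }
                ctx.Response.Close();
            }
        }
    }

Because the proxy runs next to the application rather than inside it, the same resiliency and logging behavior can be reused by applications written in different languages.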

Azure App Service

The Azure Mobile Services suite, which offered push notification capability, authentication, and data storage, was discontinued in December 2016. Microsoft moved sites from Mobile Services to Azure App Service, which offers similar functionality.

There was an overlap between App Service and Mobile Services, which was introduced in 2012. In the mobile backend world, Azure Mobile Services faced competition from the Cognito service offered by public cloud market leader Amazon Web Services; Google Cloud Platform has Cloud Endpoints and Firebase, and Facebook is in the process of shutting down its Parse mobile backend.

Please note that your existing Mobile Service is safe and will remain supported. However, the Azure App Service platform provides advantages for your mobile app that are not available with Mobile Services:

  • A simpler, easier, and more cost-effective offering for apps that include both web and mobile clients
  • New host features, including WebJobs, custom CNAMEs, and better monitoring
  • Integration with Traffic Manager
  • Connectivity to your on-premises resources and VPNs using VNet, in addition to Hybrid Connections
  • Monitoring, alerting, and troubleshooting for your app using Application Insights
  • A richer spectrum of underlying compute resources and pricing
  • Built-in auto scale, load balancing, and performance monitoring
  • Built-in staging, backup, roll-back, and testing-in-production capabilities

Azure OData Service

What is OData?

SOAP-based web services gave applications a medium for communicating with each other, and RESTful APIs, released later, made these communications easier. OData stands as the next step. It is short for Open Data Protocol, and its general philosophy is to standardize the usage of RESTful APIs and to make that usage a simpler affair for applications.

Basically, through the OData protocol, you are able to access any particular bit of data in the target system via an HTTP URL.
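
For example, against a hypothetical service at http://example.com/MyService.svc, conventional OData URLs look like this:

    http://example.com/MyService.svc/Customers                        -- the Customers entity set
    http://example.com/MyService.svc/Customers('ALFKI')               -- one customer, addressed by key
    http://example.com/MyService.svc/Customers?$top=10&$orderby=Name  -- query options applied to the set
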

How do I expose my application data as an OData URL?

So, an application such as Salesforce can consume data via the OData protocol. But how would you expose your back-office data as an OData service in the first place?

Where do I find test OData Services?

Public sample services are available for experimentation; the OData organization, for example, hosts read-only reference services at services.odata.org. But if you would like to stand out by creating situation-specific data and a personal-looking OData URL, or simply to get a feel for creating a custom OData service, you can build one yourself, as described below.

Creating an OData Service

Like I said, one of the ways to create an OData service is through the .NET WCF Data Services templates. That is what we will be using here.

Creating a Cloud Database

To create an OData service, there should be some data to read from in the first place. Let's leverage the Microsoft Azure platform to create a hosted SQL database.

Building an OData Service 

Let us now proceed to build an OData service that can read from this cloud database. We are going to use the WCF Data Service template available in Visual Studio. Of course you can use earlier versions of Visual Studio, but then you might need to go through some extra steps to enable the WCF Data Services template. Described below are the steps to create the OData service.

  1. Open Visual Studio, click File > New Project, and choose ASP.NET Web Application. Provide a meaningful Project Name and Solution Name. Click OK.
  2. In the pop-up that is presented, choose "Empty", as we do not require any pre-built templates at this point.
  3. Visual Studio now sets up a project and solution space for you. It is good to have the Solution Explorer view enabled for easy navigation (Ctrl+Alt+L). From Solution Explorer, right-click on your project and choose Add > New Item.
  4. Choose ADO.NET Entity Data Model, since we first have to generate our DB entity model within the context of the application we are making. Visual Studio lets us do all of this using point and click.
  5. In the choices for Model Contents, choose EF Designer from Database, which tells Visual Studio to design the entity model based on our DB table structure. Click Next.
  6. At this point you are asked for the details of the SQL server to which the data connection should be made. Click the "New" button to open the dialog that initiates a new connection request. You now need to know the server name of your Azure DB. Where do you find that?
  7. Log in to the Azure management portal, go to the detail page of your DB, and you will find your server name there. Copy it for use here.
  8. Once you have the server name, paste it in and enter the SQL authentication credentials that you specified when creating this DB. After a short wait, the available databases on the SQL server are displayed; the names might take a while to load, so be patient. If you get an error pop-up instead, it is most likely because your current IP is not in the list of trusted IPs at Azure, so add your current IP to your DB's trusted list. As long as you did not get an error, you are good. In the next step, choose "Yes, include sensitive data..." and then choose the entity model. That gives you a screen to choose which database elements need to be brought into the context of this app.
The next broad task we are going to perform is to add a data service to this application. WCF Data Services enables you to create services that use the Open Data Protocol (OData) to expose and consume data over the Web or intranet. 
  9. As you did above in Step 3, right-click on the project and choose Add > New Item. In the selection pop-up, choose the WCF Data Service template from the Web category. Be sure to give this data service a proper name, since the name you give here appears as the last part of your OData URL.
  10. Once you click Add, Visual Studio opens a template-based DataService class with neat instructions on what to do. Notice the TODO tags within the template; they describe exactly what needs to be filled in.
  11. Fill in the [class name] in line 2, then uncomment the line config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); and replace "MyEntityset" with "*". This grants read access to all data via this OData service. (A sketch of the completed class follows this list.)
  12. That is it, you are done. Click the browser button on the top pane to check that everything is all right. If it is, you should get an XML output with a local version of your OData service.
  13. Suffix the URL with "$metadata" to get the entire metadata structure of your database.
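
For reference, here is a sketch of what the completed service class might look like after those edits. MyODataService and MyDatabaseEntities are placeholders for the service name and the entity model class generated in your own project:

    using System.Data.Services;
    using System.Data.Services.Common;

    public class MyODataService : DataService<MyDatabaseEntities>
    {
        // Called once to set service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            // "*" grants read access to every entity set in the model.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion =
                DataServiceProtocolVersion.V3;
        }
    }
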

So now we have an OData service running on our local machine, but that might not be enough for other applications to consume. So let's get this OData URL into the cloud. How? Azure again.

  1. Sign in to Azure and, via the New button, go to New > Web App > Quick Create. Enter a neat-looking URL, as this constitutes the first part of your OData service URL.
  2. Once the web app is created, you are taken to the web app dashboard, where you will find the option to download the publish profile. A publish profile is a collection of settings that lets a third-party app authenticate against and publish applications to this hosted web application. Download and save the publish profile to your computer.
  3. Go back to Visual Studio, right-click on the project, and choose "Publish". If you don't see the Publish option enabled, it could be because you are in debug mode; there is a "Stop" button on the header bar that takes you out of the debug session, after which the Publish option is available.
  4. Perform the web publishing by importing the publish profile file that you saved a while ago. You will notice that most of the parameters regarding the web application get filled in automatically, which is why we chose the publish profile import method. Now, click Publish.
  5. It might take a while for the entire web application to be published to Azure. Once done, you will see a success page. Since there is no landing page or front end defined for our web application, this will be the generic Azure page.
  6. Extend the URL by adding <yourODataServiceName>.svc (for example, https://<yourwebapp>.azurewebsites.net/<yourODataServiceName>.svc), and you should be able to see the XML.

That’s it! You have just created a personalized custom OData service URL. 
