Microsoft has made it obvious that it is taking the OData protocol very seriously by integrating it into SharePoint.

Generally speaking, OData extends AtomPub. AtomPub, as specified in [RFC5023], is appropriate for use in Web services that need a uniform, flexible, general-purpose interface for exposing create, retrieve, update and delete (CRUD) operations on a data model to clients. It is less suited to Web services that are primarily method-oriented or in which data operations are constrained to certain prescribed patterns.

Let me paraphrase that.  If all your service is going to do for your client is “CRUD” on generic data, then OData is appropriate.  As long as everyone keeps this in mind going forward, we should not run into too much trouble.  However, there is a problem with this statement: REST is not really appropriate for doing CRUD.

ODBC allows clients to initiate transactions across multiple requests. REST does not allow this, as it would violate the stateless constraint.  REST does not need this because it is intended to address a completely different layer of the application architecture than ODBC.  REST provides a way to deliver domain services.  For example, if you maintain weather data, REST provides you an easy way to expose “Today’s Weather”, “Last Week’s Weather for Detroit”, or “Average Rainfall in Orlando for the Month of June”.  ODBC is aimed at the layer that exposes the data points for a specific place at a specific date and time.
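To make the contrast concrete, the weather service above might expose its domain-level resources at URIs like these (hypothetical paths, purely for illustration):

```
GET /weather/today
GET /weather/detroit/last-week
GET /weather/orlando/rainfall/june/average
```

Each of those delivers finished information; none of them is a raw query over rows of data points.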

ODBC exposes dumb data; REST exposes intelligently presented information.

In an ODBC application, it is the client that does something intelligent with the data before presenting it to the user.  In a REST application, the client usually just makes the “intelligent information” pretty.  REST and ODBC are not comparable.

So is OData useful?  Absolutely, it is useful to people who want to manipulate generic information, such as SharePoint lists, or data to feed into PowerPivot or Excel.  If you need to expose a generic data store to a client that will do graphing, statistical analysis, or some kind of visualization, like rendering Mars Rover data, then it could be very useful.

However, if you want to provide a service that delivers intelligent information that is specific to a particular domain, then OData is not appropriate.

Beyond my fear of developers attempting to use OData for unintended purposes, there are a few other things that I think should be fixed in the OData spec.

The Atom Entry content element should not use application/xml as the media type.  The content contains XML that is specifically related to the Entity Data Model and should be identified as such.  A media type such as application/EDM-Instance+xml might be sufficient.  What would be even better is if that content element contained a link to the CSDL file that defines the EntityType, which is currently accessed by constructing a URI of the form [Service]/$metadata.
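As a sketch of what that could look like, here is a hypothetical OData Atom entry. Both the EDM-specific media type and the describedby link are my suggestions, not part of the current spec; the entity properties are made up:

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
       xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
  <!-- hypothetical: a media type that says "this is an EDM instance",
       instead of the generic application/xml -->
  <content type="application/EDM-Instance+xml">
    <m:properties>
      <d:ID>1</d:ID>
      <d:Name>Widget</d:Name>
    </m:properties>
  </content>
  <!-- hypothetical: an explicit link to the CSDL that defines this EntityType,
       so the client never has to construct [Service]/$metadata itself -->
  <link rel="describedby" href="http://example.com/service/$metadata#Products" />
</entry>
```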

Client-side URI construction is a really nasty habit to get into.  I think, for the most part, MS can get away with the construction of query parameters like $skip, $top, and $orderby, but actually constructing the path segments of a URI is just going to lead to client-server coupling that will hurt in the future.
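Here is a minimal sketch of the distinction, assuming a helper of my own invention (withODataOptions is not a real API): the client appends only the standard OData query options to a service URL it was given, and never fabricates path segments.

```typescript
// Hypothetical helper: append OData query options ($skip, $top, $orderby)
// to a URL the server handed us. The path itself is never constructed here.
function withODataOptions(
  serviceUrl: string,
  opts: { skip?: number; top?: number; orderby?: string }
): string {
  const params: string[] = [];
  if (opts.skip !== undefined) params.push(`$skip=${opts.skip}`);
  if (opts.top !== undefined) params.push(`$top=${opts.top}`);
  if (opts.orderby !== undefined) params.push(`$orderby=${encodeURIComponent(opts.orderby)}`);
  // No options? Return the URL untouched.
  return params.length ? `${serviceUrl}?${params.join("&")}` : serviceUrl;
}

// Example: page through a collection the server told us about.
const page = withODataOptions("http://example.com/Products", { skip: 10, top: 5 });
// page is "http://example.com/Products?$skip=10&$top=5"
```

Query options like these are declared by the protocol, so the coupling is to the spec; hard-coding path segments couples you to one server's internal layout instead.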



Roadmap of AngularJS


Some of the features that Angular is looking to roll out in upcoming releases include:

TypeScript compatibility

The TypeScript team has been working on a number of improvements, including:

Creating smarter compilers that handle errors better and give better error messages
Implementing strictNullChecks to provide extra type safety

This means the Angular compiler (ngc) will be faster, since it will take advantage of TypeScript's optimizations.
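As a quick illustration of what strictNullChecks buys you (the greet function is my own made-up example, not Angular code): with the flag enabled, the compiler refuses to let you touch a possibly-null value until you have guarded against null.

```typescript
// With strictNullChecks on, `name` cannot be used as a string
// until the null case is handled; without the guard below,
// `name.toUpperCase()` would be a compile-time error.
function greet(name: string | null): string {
  if (name === null) {
    return "Hello, stranger";
  }
  return `Hello, ${name.toUpperCase()}`;
}
```

The null bug is caught at compile time instead of surfacing as a runtime TypeError.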

Backwards compatibility with Angular v2

It will be able to use interfaces and data from applications made with Angular v2.

Better Angular compiler errors

The compiler will be much smarter in terms of error handling.


The ngc will be faster in terms of both runtime speed and parse time, and it will also be smaller. Developers are more productive this way and can get more done.



Entity Framework Code First

The code first approach, part of Entity Framework 4.1, was the last workflow Microsoft introduced. It lets you transform your coded classes into a database application, with no visual model involved. Of the three workflows, this approach offers the most control over the final appearance of the application code and the resulting database. However, it’s also the most work. And it presents the biggest obstacles to communicating well with non-developers during the initial design process.

With the code first workflow, you also need to write glue code in the form of mapping code and, optionally, database configuration code. However, even in this respect, the workflow provides developers with significant advantages in flexibility. You know precisely what is going on with the underlying code at all times, which is a huge advantage when working with enterprise systems. The cost of this knowledge is equally huge; it takes considerably longer to develop the application.

A code first workflow is the only realistic solution when an organization decides on a code-focused approach for an existing database. In this case, however, the developer must reverse engineer the database and create classes that reflect the database design as it exists. Microsoft does provide some assistance with this task in the form of the Entity Framework Power Tools, but you should still expect to end up tweaking the code to precisely match the application requirements and the underlying database.


AngularJS with or without ASP.NET MVC

If you’re building a single page application (SPA), then you probably don’t need the “MVC” in ASP.NET MVC. Views, especially dynamic views, are likely delivered/manipulated client-side. Angular handles that just fine.

But maybe you don’t want a complete SPA. Then what? Imagine instead 10 pages, but 10 pages that are very dynamic. After a user logs on, there’s a little user-info badge up in the right-hand corner. It just shows a few things, like the user’s “points”. You cache that data so it can be easily retrieved. Now, you can go two ways with this. If you’re a client-side MVC purist, you just fetch the badge data after the initial HTML payload is delivered, just like all the other data. But maybe you’re not a purist. Maybe you’re the opposite of a purist. So, instead of delivering the initial HTML, then having some JavaScript post back to your server to grab the user-info data, and ultimately merging that data into a view via client-side MVC, you simply decide to merge the data already in your cache into a view on the server and deliver that as your initial HTML. After the initial HTML is delivered, you proceed with your typical client-side MVC code.
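A minimal sketch of that server-side merge, assuming invented names throughout (the UserInfo shape and renderBadge helper are illustrative, not a real API): the cached data is folded into the badge markup before the initial HTML ever leaves the server.

```typescript
// Hypothetical shape of the cached user info.
interface UserInfo {
  name: string;
  points: number;
}

// Server-side merge: the badge arrives in the initial HTML already
// filled in from the cache, so no extra round-trip is needed on first paint.
function renderBadge(user: UserInfo): string {
  return `<div class="user-badge">${user.name} (${user.points} pts)</div>`;
}
```

The client-side MVC code then takes over for everything that happens after that first response.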

MVC on the server and on the client is just a convenient way to organize code. The more you do after that initial HTML is delivered, the less you need server-side MVC. No matter how you deploy Angular, you’re going to need a way to deliver that initial HTML, the templates and, most importantly, the data. You can make the initial HTML and external Angular templates the result of an MVC action, but better yet, you can use .NET’s Web API to deliver the data.


Copyright © All Rights Reserved - C# Learners