The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

# Saturday, October 11, 2008
CLR vNext with side-by-side support
Saturday, October 11, 2008 10:19:31 AM UTC ( Architecture )

Reading around the PDC site for some scoops into the future, I’m pleased to see one session covering how the CLR vNext will support side-by-side versioning of CLRs within the same process.

This may seem like a rather obscure requirement at first, but keep in mind that we now have CLR v1.0, CLR v1.1, CLR v2.0 and the updated CLR v2.0 shipped with .Net Framework 3.5 SP1. Luckily these CLRs and their libraries are largely compatible. However, over the years the industry has written countless .Net components that it probably expects to be able to use for some time to come, even in-process. As our development tools and new frameworks keep pushing us up the stack to the next version of .Net, we will probably see some issues soon.

Hopefully, this feature goes beyond providing support for multiple Silverlight versions within the same browser process, and enables us to use CLR 2.0 components from CLR vFuture. If this is the case, I'm looking forward to seeing how they will provide interoperability, or whether we'll have to use an in-proc WCF channel for this purpose.

This may even be a hint that Microsoft is not expecting backwards compatibility between the current and future CLRs and their libraries.
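For contrast, today an application can only express a runtime preference for the whole process in its configuration file. A minimal sketch (the version strings are the standard v2.0 and v1.1 build numbers):

```xml
<configuration>
  <startup>
    <!-- The loader picks exactly one CLR for the process, trying these
         in order; v1.1 and v2.0 components cannot run side by side in-proc. -->
    <supportedRuntime version="v2.0.50727" />
    <supportedRuntime version="v1.1.4322" />
  </startup>
</configuration>
```

Whatever runtime wins here is the runtime every loaded assembly gets, which is exactly the limitation the PDC session promises to lift.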

# Monday, October 6, 2008
Windows Server “Dublin” technologies
Monday, October 6, 2008 4:31:02 PM UTC ( Architecture | Dublin | Indigo )

PDC is approaching rapidly and Microsoft is opening up its communication around the next wave of technologies; one thing I believe to be particularly interesting is Codename Dublin.

This technology supplements Windows with much needed application platform components to enhance the WCF and WF design experience.  Among other things it includes infrastructure services for message correlation and forwarding, content-based routing and transaction compensation.

I guess you can look at WCF and WF as frameworks and Dublin as infrastructure services around those frameworks.

The Dublin release will follow the release of .Net Framework v4.0 and Visual Studio 2010.

Mono v2.0 is out
Monday, October 6, 2008 2:12:05 PM UTC ( Architecture | Mono )

It’s a great day for cross-platform .Net as Mono v2.0 is released. It now comes fully stocked with ADO.NET 2.0 / ASP.NET 2.0 / Windows Forms 2.0 as well as a C# 3.0 compiler and LINQ support. In other words, there are also some .Net 3.5 bits in there.

It also ships with a nice collection of ADO.NET providers that are not available in the Microsoft distribution, as well as the usual non-Windows native goodies.

Interesting to see that they are also bundling the C5 Generic Collection library, indicating that this is probably an area where the base class libraries need more work, features and standardization.

# Tuesday, April 10, 2007
The Importance of Mono
Tuesday, April 10, 2007 8:23:53 PM UTC ( Architecture | Mono )

I’ve just noticed a nice little article about the importance of Mono (.Net on other platforms). Mono is one of my favorite open source projects, not to mention the significance I feel it has in the .Net domain. Have a look.

# Monday, July 4, 2005
TechEd 2005 Europe
Monday, July 4, 2005 6:54:22 AM UTC ( Architecture | BizTalk Server | Indigo | Talks )

It’s time for TechEd Europe again!

I’m doing a session on BizTalk Server 2004 and Indigo where I will go through some of the scenarios where these technologies can work together to offer some very interesting solutions. There will be several demos showing off the latest bits of the prototype adapter as well as a little low-level section on how the adapter was developed at the end.

If you find this topic interesting then drop by Room 3A on Thursday 7th at 18.15!

# Tuesday, August 10, 2004
WS-Addressing makes its way to W3C
Tuesday, August 10, 2004 8:23:12 PM UTC ( Architecture | Indigo | Security | WSE )

WS-Addressing, a vital piece of XML Web Services infrastructure, has just been submitted to the W3C.

With WS-Security being an official OASIS standard, and WS-Addressing entering the W3C standardization process, we may soon be looking at the first stable set of advanced web services infrastructure.
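The addressing headers themselves are compact. A SOAP envelope carrying them looks roughly like this (the endpoint URIs are made up; the namespace is the one used by the August 2004 submission):

```xml
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing">
  <soap:Header>
    <!-- Where the message is going, and which operation it targets -->
    <wsa:To>http://example.org/orders</wsa:To>
    <wsa:Action>http://example.org/orders/Submit</wsa:Action>
    <!-- Identity and correlation, independent of the transport -->
    <wsa:MessageID>urn:uuid:6b29fc40-ca47-1067-b31d-00dd010662da</wsa:MessageID>
    <wsa:ReplyTo>
      <wsa:Address>http://example.org/client/callback</wsa:Address>
    </wsa:ReplyTo>
  </soap:Header>
  <soap:Body>
    <!-- application payload -->
  </soap:Body>
</soap:Envelope>
```

Because the routing and correlation information lives in the envelope rather than in HTTP, the same message can flow over any transport and through intermediaries.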

These two specifications form the foundation of the Microsoft Web Services Enhancements toolkit's functionality. Perhaps we will soon say goodbye to backwards incompatibility and short support lifecycles in this particular area.

As always, it is great fun to follow the advances.

# Wednesday, July 14, 2004
SQL Server 2005: CLR Hosting – Establishing Balance
Wednesday, July 14, 2004 2:30:06 PM UTC ( Architecture | SQL Server 2005 )

I’ve seen a lot of posts about the CLR Hosting support in SQL Server 2005, and quite a few of them discuss the possibility of moving business code into the SQL Server engine. I think it is time to establish some balance here, and I’m going to throw in my 2 cents.

CLR Hosting has a few obvious use cases, but it is in no way a replacement for T-SQL. If you are writing extended stored procedures then this is definitely the only logical way to go. It has a much better and safer programming model than the native one. If you are writing complex algorithms that can severely limit your result set then it is probably a good idea to put those into the server as well. T-SQL stored procedures that have massive amounts of non-dataset-related code like encryption, conversions and extensive string manipulation could probably benefit from being completely or partially turned into managed CLR functions.
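As a sketch of the kind of logic that fits a managed function better than T-SQL, here is a hypothetical scalar function doing regular expression matching (the class and function names are my own invention):

```csharp
using System.Data.SqlTypes;
using System.Text.RegularExpressions;
using Microsoft.SqlServer.Server;

public static class StringFunctions
{
    // Regular expression matching is painful in T-SQL but trivial in the CLR.
    [SqlFunction(IsDeterministic = true)]
    public static SqlBoolean IsMatch(SqlString input, SqlString pattern)
    {
        if (input.IsNull || pattern.IsNull)
            return SqlBoolean.Null;
        return Regex.IsMatch(input.Value, pattern.Value);
    }
}
```

Set-based selection logic, by contrast, has no business being rewritten this way.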

If, on the other hand, you are writing classical dataset manipulation and selection procedures, then T-SQL is a language that is highly optimized and specifically designed for just that purpose. Keep in mind that managed stored procedures still use T-SQL to interact with the relational database engine; look at some code samples!
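The point is visible in any managed procedure sketch; even inside the server, the data access is still expressed as T-SQL over the in-process context connection (the table and column names here are hypothetical):

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class Procedures
{
    [SqlProcedure]
    public static void GetOrdersForCustomer(int customerId)
    {
        // "context connection=true" runs against the hosting SQL Server
        // instance, in-process; the query itself is still plain T-SQL.
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @id",
                conn);
            cmd.Parameters.AddWithValue("@id", customerId);
            // Stream the result set back to the caller.
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}
```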

If you take a step back, you will see that you are making a decision about when it makes sense to utilize the database server processor over the application server processor. Clearly, application servers are a lot cheaper and usually a lot easier to scale out. At the end of the day, in any well designed distributed architecture, the database server is going to be your bottleneck. It would probably make sense to keep whatever processing you can away from that precious resource.

However, if you are using an algorithm to determine which records to return to the client, and you expect that it may severely limit the number of records returned, then it probably makes sense to put it on the database server. Returning 2GB of data to the application server, and then filtering away 98% before returning it to the client, may be a massive waste of resources. You'll have to make an informed tradeoff decision.

There are no absolute rules; you will need to evaluate every single case for yourself. My advice is to stick with the way you've been writing applications with SQL Server 2000 and keep the T-SQL stored procedures the way they are. At least then you will know that whenever you utilize managed code in SQL Server, you've made a conscious tradeoff rather than blindly following the ever popular anti-T-SQL movement. Regarding business logic, keep it on your application tier where it has been living so happily over the last few years. Once again, if you do decide to move it to SQL Server, make sure you've made a well informed decision that works with both your application and your business requirements.

Leave your defaults the way they are; it’s an evolution not a revolution.

# Friday, May 30, 2003
Back at The Norwegian .Net User Group
Friday, May 30, 2003 6:21:46 PM UTC ( Architecture | Talks )

I talked about Message Oriented Architectures at the Norwegian .Net User Group (NNUG) on the 27th of May in Oslo. This was my second speaker appearance at NNUG, and this session was a very different experience from my last one. I found it a lot more challenging to talk about architecture than about dynamic SQL vs. stored procedures. It's really hard to decide which slides and bullet points to include when delivering an introduction to such a wide and exciting topic.

It was fun to share my thoughts on XML and modern messaging, loosely coupled designs and asynchronous messaging. I ended the talk with some slides on GXA and the XML message bus. After the presentation we had a very interesting discussion about some issues with asynchronous designs.

I’ve made the slide deck available here if anyone is interested.

# Friday, March 28, 2003
Asynchronous Messaging is Dangerous
Friday, March 28, 2003 11:30:36 PM UTC ( Architecture )

Messaging as an application design pattern has been around in enterprise applications for decades, but recently it has been getting a lot of community press as web services are starting to roll out and service oriented architectures are climbing the preferred design ladder.

One of the most interesting aspects of message oriented programming is the inherent ability to approach an asynchronous design. Yet, for some reason, the transition from synchronous operations seems to be very hard. I suspect that it is not necessarily the added technical complexity, but rather the fear of losing control that is the primary obstacle. Now that we are moving towards factoring our systems into services, this is a situation we must learn to master.

Looking at the way business transactions are done today, and the way they have been done for ages, we can easily see that quite a few of them are asynchronous, whether the messages are exchanged via regular mail, fax, e-mail or some messaging product like Microsoft BizTalk Server. When, for instance, a user compiles a set of requests into an order with a procurement system, the transaction is committed long before the order has reached the supplier, or even before the order has left the procurement system. In this particular scenario we are used to an asynchronous interaction, and it is indeed the only way.

Keeping the aforementioned example in mind, it is strange that we seem unable to apply the same pattern within our own applications. I guess to some extent it is because we feel that we are the masters of our own systems, that we have the ability and the right to perform synchronous operations, and that we should do so either to uphold consistency or to provide accurate and immediate user responses. Just because we can!

Even though a synchronous design may provide you with the comforting feeling of control, as business processes become more complex and the number of users increases, this will prove to be a troublesome preference. As we start refactoring our applications into services and begin to enjoy the dynamic nature of GXA, it will be unreasonable to expect a synchronous processing pattern for a variety of reasons: processing time, load balancing, transactional boundaries, internal system restrictions and manual processing tasks, just to name a few.

I guess what I am saying is that we need to learn to let go, to be able to trust individual parts of our own systems as well as external systems, and to embrace the black box nature of services. It's important to note that asynchronous messaging is in no way synonymous with unreliable messaging. If we are able to do this then at least the fear of losing control may fade away, and I guess the immediate user response is a user education problem.

# Wednesday, March 26, 2003
Messaging is hot
Wednesday, March 26, 2003 5:40:53 PM UTC ( Architecture )

The blog community is discussing message oriented programming, service oriented architectures, and of course SOAP and RPC. It’s great to see so many smart people seeing the potential of these emerging architectural principles!

And for what it’s worth; I’m in the messaging camp!

# Monday, December 30, 2002
Trust and Global Service Registries
Monday, December 30, 2002 6:50:08 PM UTC ( Architecture )

When talking about service oriented architectures we basically talk about three fundamental pieces. There is a service consumer that queries a service registry or a service broker for a set of service providers that meet a set of supplied criteria. The service consumer then selects the preferred service based on a set of preferences, and the required operations are performed on the selected service.

The interesting thing about this is that the industry has launched the vision of a global service registry where one could organize and resolve services using some sort of global taxonomy. Even though this approach might work for the most rudimentary of services, what is increasingly interesting is that their visionary samples tend to touch the area of B2B commerce. As most procurement managers know, one of the most important aspects of any supplier selection process is to establish a reliability level. When you find a supplier that satisfies your requirements, you set up a trade agreement to ensure that the reliability level is agreed upon and maintained. You establish a trust relationship. The thing is that I fail to see how this is going to work with a global registry of supplier-like services. Where is the negotiation phase that is designed to establish a working business relationship with an implied two-way trust?

My point here is by no means to undermine the importance of a service broker or a service registry, which is in fact a hugely important part of any service oriented architecture. It will play an important role in making your application both pluggable and highly interoperable while effectively enforcing a much needed level of abstraction. But a successful registry has to be filled with a set of pre-approved services from suppliers that the customer has established a healthy business relationship with. Providing such a private registry service makes perfect sense in any B2B architecture. Trying to solve the complexity of a negotiation phase in a machine readable service contract doesn't.

Even though the industry is starting to address this issue, and has somewhat shifted the focus towards local registries, I still see the oversimplified idea of a global registry in both technical writings and vision documents. Hopefully we will all accept that there is a reasonable amount of complexity involved in streamlining current business processes.

# Wednesday, December 18, 2002
Time Zone Headaches
Wednesday, December 18, 2002 8:43:54 PM UTC ( Architecture )

Globalization has never been an easy aspect of any development project. Lately I’ve been evaluating the requirements for serving users from different time zones, and at first sight this appeared to be a fairly manageable task. After all, it’s just about storing all your dates in UTC and then adding or subtracting the time difference to get the user’s local time. Or so it seemed…

First of all, there is no nice way of retrieving the local time zone from the user’s browser, short of using Passport and forcing the user to share his or her time zone information. So the user will have to choose the time zone he or she wants to use, and I guess the time zone support is worth the extra seconds spent during user registration.

Then, I entered the domain of the DST (daylight saving time / summer time) monster. At first I thought it would be fairly easy to find a standard for this, or that Windows or the .NET Framework would be nice enough to provide me with the necessary functionality.

The .NET Framework does not, to my knowledge, support attaching different time zones to different threads, the way it does with CultureInfo. But Windows 2000 does have a function that allows me to convert a given UTC date to a specific local time zone, as long as I supply the time zone information: UTC offset and DST information. It’s even possible to extract this information for every registered time zone from the Windows Registry. It’s not elegant, but it should work as long as I insert the time zone conversion in the user interface layer. I could even write my own .NET TimeZone implementation.
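Such a hand-rolled TimeZone replacement might look something like this minimal sketch (the type, its field names, and the single fixed DST window per zone are my own simplification):

```csharp
using System;

// Hypothetical per-zone record, populated from the UTC offset and
// DST information stored in the Windows Registry for each zone.
public sealed class SimpleTimeZone
{
    public TimeSpan UtcOffset;       // base offset from UTC
    public TimeSpan DaylightDelta;   // extra offset while DST is in effect
    public DateTime DaylightStart;   // DST start, in local standard time
    public DateTime DaylightEnd;     // DST end, in local standard time

    public DateTime ToLocalTime(DateTime utc)
    {
        // Apply the base offset, then the DST delta if the resulting
        // local time falls inside the daylight window.
        DateTime local = utc + UtcOffset;
        if (local >= DaylightStart && local < DaylightEnd)
            local += DaylightDelta;
        return local;
    }
}
```

Note that a single fixed start/end pair only describes one year's rules, which is precisely where the trouble described below begins.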

This could have been the end of my problems, but of course it wasn’t. As it turns out, DST settings tend to vary between countries, and sometimes between regions within the same country and time zone. And to top it off, they vary from year to year. I’m not talking about fairly deterministic rules like the last Sunday in March, but more like the time when Sydney had special DST settings during the Sydney 2000 Olympic Games, and the fact that several countries, including Norway, decided to ignore DST for several non-consecutive years. I guess that is why Microsoft decided to release a time zone editor for Windows. At least I can find comfort in the fact that the European Union has created a DST standard, but it doesn’t help much with historic dates.

So, I guess the only way to handle this problem is to create a time zone for each different combination of DST settings and UTC offsets, and collect all the relevant time zone information in a large table. I was hoping it wouldn’t have to come to that :(
