Saturday, May 29, 2010

Importance of Design Patterns

I’m not a computer scientist. I’m also not one of the many über programmers that create and analyze software frameworks and techniques. I simply design and develop software that attempts to meet my customer’s needs. To that end I’m always looking for the best tools available to get the job done.

Jeremy Miller states the importance of design patterns well.

I know many people blow off design patterns as ivory tower twaddle and silly jargon, but I think they’re very important in regards to designing user interface code. Design patterns give us a common vocabulary that we can use in design discussions. A study of patterns opens up the accumulated wisdom of developers who have come before us.

You don’t need to be a rocket scientist to understand design patterns. Most are just common sense. Complex patterns are designed to solve complex problems. Design patterns should be thought of as a tool that you use just like any other. Don’t let the ‘ivory tower twaddle’ scare you away.

I think most people would agree that one of the key ingredients of a successful software product is quality. I’ve developed .NET applications in the past and have experienced the difficulty of testing and maintaining WinForms forms and components when they are created with nothing but the default Visual Studio designer tools.

If you’re not careful, here’s what you end up with.
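
Roughly something like the following. This is a hypothetical sketch (the form, controls, table and connection string are all made up for illustration; it assumes System.Windows.Forms and System.Data.SqlClient): validation, data access and UI feedback all live inside a button click handler, so the only way to exercise the logic is to run the form itself.

public partial class CustomerForm : Form
{
    private void saveButton_Click(object sender, EventArgs e)
    {
        // Validation rule buried in the UI layer.
        if (nameTextBox.Text.Length == 0)
        {
            MessageBox.Show("Name is required.");
            return;
        }

        // Data access wired directly into the event handler.
        using (var connection = new SqlConnection(@"Server=.\SQLEXPRESS;Database=Customers;Integrated Security=true"))
        using (var command = new SqlCommand("INSERT INTO Customer (Name) VALUES (@name)", connection))
        {
            command.Parameters.AddWithValue("@name", nameTextBox.Text);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

Patterns like Model-View-Presenter exist precisely to pull that logic out of the form so it can be exercised by tests without spinning up a UI.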

Monday, April 5, 2010

No device or emulator visible in 2010 Express for Windows Phone CTP

I was using Visual Studio 2010 Express for Windows Phone. I was trying to debug my application but kept getting the error "Exception from HRESULT: 0x89721800". I was pulling my hair out! Thanks to MSDN, here is the solution...

Run "VS 2010 Express for Windows Phone" as administrator. Check out Windows Phone Developer Tools release notes - section "Installation" #6:

"6. When you install the tools as an administrator and then you try to run the tools from a normal user account, deployment to the emulator fails. Workaround: keep running the tools as an administrator or change the privileges of the c:\programdata\microsoft\phone tools\corecon\10.0\addons\ImageConfig.xsl file so that all accounts can read it (i.e. everyone)."

http://download.microsoft.com/download/D/9/2/D926FB38-BB43-4D87-AE5A-1A3391279FAC/ReleaseNotes.htm#tag_installation

Sunday, February 21, 2010

Mixed Mode Authentication for SQL Server 2005 Express Edition

For SQL Server 2005 Express Edition, there is no GUI tool available to configure the server; you need to do it manually. The first step is to change the login mode.

Open the Registry Editor (run %WINDIR%\regedit.exe) and navigate to HKLM\Software\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer in the tree on the left.

On the right, look for an entry named LoginMode. The default value after installation is 1; change it to 2 to enable mixed mode authentication. The next step is to restart the service.

Launch the Services console (Start -> Run -> type services.msc) and look for a service named SQL Server (SQLEXPRESS). Restart the service.
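
If you would rather script those two steps, here is a rough C# sketch (not part of the original walkthrough). It assumes the MSSQL.1 instance key shown above, the default service name MSSQL$SQLEXPRESS, and that the program runs elevated.

using Microsoft.Win32;
using System.ServiceProcess;   // add a reference to System.ServiceProcess.dll

class EnableMixedMode
{
    static void Main()
    {
        // Set LoginMode to 2 (mixed mode) under the instance's registry key.
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"Software\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer", true /* writable */))
        {
            key.SetValue("LoginMode", 2, RegistryValueKind.DWord);
        }

        // Restart the SQL Server (SQLEXPRESS) service so the change takes effect.
        using (var service = new ServiceController("MSSQL$SQLEXPRESS"))
        {
            service.Stop();
            service.WaitForStatus(ServiceControllerStatus.Stopped);
            service.Start();
            service.WaitForStatus(ServiceControllerStatus.Running);
        }
    }
}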

Hey! We are not done yet... at least not practically. We need to add a user with administrative privileges so that the database can be accessed from ASP.NET.
From the command prompt, log in to SQL Server using the osql utility. SQL Server 2005 Express Edition is installed with the instance name SQLEXPRESS. Use the following command to log in:

osql -E -S .\SQLEXPRESS

At the SQL command prompt, execute the following:

1> exec sp_addlogin 'username', 'password'
2> go
1> exec sp_addsrvrolemember 'username', 'sysadmin'
2> go
1> quit

Replace the username and password, but don't forget the quotes. To verify, try logging in with the following at the command prompt:

osql -S .\SQLExpress -U username

Provide the password when prompted and you should be through!
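
To round it off from the ASP.NET side, here is a minimal sketch of connecting with the newly created SQL login (the database name and credentials below are placeholders; it assumes using System.Data.SqlClient):

string connectionString = @"Server=.\SQLEXPRESS;Database=MyDatabase;User Id=username;Password=password;";

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();   // fails with a SqlException if the login or the mixed mode setup is wrong

    using (SqlCommand command = new SqlCommand("SELECT @@VERSION", connection))
    {
        // Write the server version out just to prove the login works.
        Console.WriteLine((string)command.ExecuteScalar());
    }
}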

Wednesday, December 2, 2009

Sending messages to remote private MSMQ

1. When working with remote queues, the queue name in the format machinename\private$\queuename doesn't work. This results in an "invalid queue path" error.

2. The queue name has to be given as "FormatName:Direct=OS:machinename\private$\queuename". This is necessary because queue access is done internally using the format name syntax only; the friendlier representation is converted to a FormatName and then used. When working with remote queues, unless there is Active Directory available to resolve the queue name, the friendly name won't work. Check the documentation for details.

For example:

MessageQueue queue = new MessageQueue(@"FormatName:Direct=OS:machinename\private$\queuename");
queue.Send("hello world");

3. Further to the previous point, note that "FormatName" is case sensitive. If you write the earlier string as "FORMATNAME:Direct=OS:machinename\private$\queuename", it won't work. Surprisingly, no error is thrown in this case. The "FormatName" part of the string seems to be the only case-sensitive part; the others can appear in a different case. For example, you can write "DIRECT".

4. If you want to use the machine's IP address, the syntax is "FormatName:Direct=TCP:ipaddress\private$\queuename".

For example:

MessageQueue queue = new MessageQueue(@"FormatName:Direct=TCP:121.0.0.1\private$\queue");
queue.Send("hello world");

5. The transactional properties of the queue instance you create in code should match those of the queue you are sending the message to. In the earlier examples I was sending messages to a non-transactional queue. To send to a transactional queue, the code would be:

MessageQueue queue = new MessageQueue(@"FormatName:Direct=OS:machinename\private$\queuename");
queue.Send("hello world", MessageQueueTransactionType.Single);

If the transactional properties don't match, the message will not be delivered. The surprising part, again, is that I didn't get any error; the message just disappeared.
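
As an aside, if you need to send several messages atomically to a transactional queue, an explicit internal transaction also works. A small sketch, reusing the hypothetical queue path from above:

MessageQueue queue = new MessageQueue(@"FormatName:Direct=OS:machinename\private$\queuename");

using (MessageQueueTransaction transaction = new MessageQueueTransaction())
{
    transaction.Begin();
    queue.Send("hello world 1", transaction);
    queue.Send("hello world 2", transaction);
    transaction.Commit();   // both messages are delivered, or neither
}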

6. Finally, when you send messages to a remote queue, a temporary outgoing queue is created on your own machine. This is used in case the remote queue is unavailable. If you open the Computer Management console (compmgmt.msc) and expand Services and Applications / Message Queuing / Outgoing Queues, you will see these queues. The right side of the console shows the details, including the state (connected or not) and the IP address(es) for the next hop(s).

Monday, November 9, 2009

Using DebuggerStepThrough Attribute

When debugging code, one of the annoying things is stepping into a one-line method or property. Assume that you have the following property:

private string word;
public string Word {
    get { return word; }
    set { word = value; }
}

And you have a code that uses that property when calling a method:

DoSomething(obj.Word);

When you debug that line and hit F11 to step into the method, you'll first step into the get accessor of the property, and only then move on to the method.

By placing the System.Diagnostics.DebuggerStepThrough attribute on the get and set accessors of the property, you instruct the debugger to step through the property rather than into it:

public string Word {
    [System.Diagnostics.DebuggerStepThrough]
    get { return word; }

    [System.Diagnostics.DebuggerStepThrough]
    set { word = value; }
}

This causes the debugger not to step into the method (or property) as it normally would, but you can still place a breakpoint inside it and stop there.
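
The attribute can also be applied to whole methods (or classes), which is handy for trivial helpers you never want to land in while stepping. A quick sketch with a hypothetical helper:

[System.Diagnostics.DebuggerStepThrough]
private static void Trace(string message)
{
    // F11 will not stop inside this method unless a breakpoint is set here.
    System.Diagnostics.Debug.WriteLine(message);
}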

Wednesday, September 9, 2009

Throw vs Throw Ex

Just for demonstration, suppose you have the following classes in C#:

using System;
namespace WindowsApplication1 {
public class Class1 {
public static void DoSomething() {
try { Class2.DoSomething(); } catch(Exception ex) { throw ex; }
}}

public class Class2 {
public static void DoSomething() {
try { Class3.DoSomething(); } catch(Exception ex) { throw ex; }
}}

public class Class3 {
public static void DoSomething() {
try { int divider=0; int number=5/divider; } catch(Exception ex) { throw ex; }
}}}

And you call,

Class1.DoSomething();

What's the difference if you rethrow the exception using plain 'throw;' or 'throw ex;'?

Answer:

If you use "throw ex;", The stack trace is something like,

System.DivideByZeroException: Attempted to divide by zero.
at WindowsApplication1.Class1.DoSomething() in C:\WindowsApplication1\main.cs:line 15
at WindowsApplication1.Form1.button1_Click(Object sender, EventArgs e) in C:\WindowsApplication1\Form1.cs:line 103

But if you use just 'throw' instead of 'throw ex' to rethrow the same exception, that is:

public class Class3 {
public static void DoSomething() {
try { int divider=0; int number=5/divider; } catch { throw; }
}}

and do the same for all the other rethrow statements, then the stack trace looks like this:

System.DivideByZeroException: Attempted to divide by zero.
at WindowsApplication1.Class3.DoSomething() in C:\WindowsApplication1\Main.cs:line 46
at WindowsApplication1.Class2.DoSomething() in C:\WindowsApplication1\Main.cs:line 30
at WindowsApplication1.Class1.DoSomething() in C:\WindowsApplication1\Main.cs:line 15
at WindowsApplication1.Form1.button1_Click(Object sender, EventArgs e) in C:\WindowsApplication1\Form1.cs:line 103

See the difference? By the way, this applies to both C# and VB. The VB docs state that in "Throw expression" the expression is required, which is not completely true: you can use just "Throw" in VB to rethrow the current exception. The C# docs do say that in "throw expression" the expression "is omitted when rethrowing the current exception object in a catch clause".
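
As a side note, if you actually want to add context when rethrowing, a third option is to wrap the original exception as an InnerException; the original stack trace then survives inside the new exception. A sketch, reusing Class2 from above:

public class Class2
{
    public static void DoSomething()
    {
        try
        {
            Class3.DoSomething();
        }
        catch (Exception ex)
        {
            // The original DivideByZeroException (and its stack trace) survives as InnerException.
            throw new InvalidOperationException("Class2.DoSomething failed.", ex);
        }
    }
}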

Tuesday, September 8, 2009

Layering Architecture and Namespaces

I was wondering whether I need separate namespaces for each layer, such as MyComponent.DAL, MyComponent.BO and MyComponent.Service. What are the pros and cons of having (or not having) separate namespaces?

I see two approaches here.

Approach #1

Separate DLLs for each component, with the layers in separate namespaces (UI, Service, BO, DAL), as follows:

Component_1.dll with Component_1.DAL, Component_1.BO, Component_1.Service and Component_1.UI
Component_2.dll with Component_2.DAL, Component_2.BO, Component_2.Service and Component_2.UI
...
...
...
Component_N.dll with Component_N.DAL, Component_N.BO, Component_N.Service and Component_N.UI

Approach #2

Separate DLLs for each layer, containing all components, as follows:

UI.dll
Service.dll
BO.dll
DAL.dll

In Approach #1, adding a component is easy and doesn't affect the other modules, whereas in Approach #2 the whole system (UI.dll, Service.dll, BO.dll, DAL.dll) needs to be recompiled. On the other hand, Approach #2 facilitates easy replacement of layers.
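
To make the two layouts concrete, here is roughly what the namespaces look like in code. The component and class names, and the layer-first naming used for Approach #2, are just placeholders:

// Approach #1: one assembly per component, layers as nested namespaces.
// Component_1.dll
namespace Component_1.DAL     { public class CustomerRepository { /* ... */ } }
namespace Component_1.BO      { public class Customer { /* ... */ } }
namespace Component_1.Service { public class CustomerService { /* ... */ } }
namespace Component_1.UI      { public class CustomerView { /* ... */ } }

// Approach #2: one assembly per layer, components as nested namespaces.
// DAL.dll
namespace DAL.Component_1 { public class CustomerRepository { /* ... */ } }
namespace DAL.Component_2 { public class OrderRepository { /* ... */ } }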

Architect's Advice

Components are usually self-contained and are deployed at only one layer, say business logic, data access or user interface, whereas modules cut across all layers.

With Approach #1,

You would be able to have module-specific DLLs, with recompilation and distribution limited to that assembly only. This improves management and distribution, and it is good if you are planning to provide the component as one unit of functionality to the end user/client. In this model you can scale out by deploying such components on multiple machines, which is usually called vertical partitioning of an application. You can easily replace one module with another, or add a new module as a new set of functionality.

In this approach, if you have to modify cross-cutting concerns (a change of UI framework or UI standards, introducing a new UI pattern, or providing centralized business rules, caching, exception handling, logging/tracing, transaction management or data access functionality), then you will have to change each and every module/component-specific DLL to incorporate the change. Additionally, you will have to manage the DLLs for the cross-cutting concerns in each and every module, which increases the overall module size, and you lose flexibility and consistency of standards for cross-cutting concerns across modules.

This approach is good if you are building a small product that you want to distribute to clients who can run it by installing it locally, that doesn't need a large amount of resources (CPU/memory/databases) to run, and where you can define self-contained standards for all cross-cutting concerns, or where the cross-cutting concerns change from module to module or client to client because you provide them as tailored functionality. It has limitations on scaling up: the UI, business and data access layers together can consume a lot of memory on the machine. Scaling the layers out separately will not be possible, as they are all tightly coupled into one assembly.

Approach #2 has the consequence of recompiling, but it helps you to:

1. Maintain consistent standards and keep high flexibility for cross-cutting concerns across all your layers.

2. It is recommended in scenarios where the modules (components) are known upfront and frequent incremental addition of modules is not expected, as opposed to modification of functionality within a module.

3. You can easily scale up and scale out with a tiered approach, as against Approach #1.

Friday, July 3, 2009

Creating SQL Job using T-SQL Statements

The following shows how to create and schedule a SQL job in MS SQL Server using T-SQL.

EXECUTE msdb.dbo.sp_add_job
@job_name = 'Database Backup',
@enabled = 1,
@owner_login_name = 'sa'

EXECUTE msdb.dbo.sp_add_schedule
@schedule_name = 'Daily database backup',
@enabled = 1,
@freq_type = 4, -- daily
@freq_interval = 1, -- daily
@active_start_time = '180000'

EXECUTE msdb.dbo.sp_attach_schedule
@job_name = 'Database Backup',
@schedule_name = 'Daily database backup'

EXECUTE msdb.dbo.sp_add_jobserver
@job_name = 'Database Backup',
@server_name = 'ServerName'

EXECUTE msdb.dbo.sp_add_jobstep
@job_name = 'Database Backup',
@step_name = 'Backup database on daily basis',
@subsystem = 'TSQL',
@command = 'BACKUP DATABASE TestDatabase TO DISK = ''C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\BACKUP\TestDatabase.bak'''

Thursday, July 2, 2009

Retrieving Client Browser's Culture in ASP.NET

I have run into a couple of situations where I needed to get the client's culture on the server side in an ASP.NET application. I googled this and found only client-side solutions, but I knew there was some way to get this information because the ASP.NET framework supports client-based culture (through UICulture="Auto" in the page directive and the globalization section in web.config). The only place I could think of to get the client culture was the Request object.

After examining the HTTP Headers collection I found the Accept-Language header. It contains information about the user's preferred languages.

This is a sample Accept-Language header:

Accept-Language: bg-BG,en-US;q=0.7,ar-BH;q=0.3

The languages are explicitly defined in the browser and their order is determined there. You are probably wondering what this q-thing means. According to RFC 3282 (Content Language Headers), it specifies the language quality, in other words the priority of the language as set in the client's browser. In the example above, bg-BG (Bulgarian (Bulgaria)) has the highest priority, then en-US (English (United States)), and the last preferred language is ar-BH (Arabic (Bahrain)).

The Accept-Language header lists all the languages set in the browser in a comma-separated list, which makes it easy to extract each language.

From ASP.NET you can access this header using the Headers collection in the Request object - Request.Headers["Accept-Language"]. Then you can process it the way you like.

Alternatively, instead of using Request.Headers["Accept-Language"], you can simply use HttpRequest.UserLanguages to get a sorted string array of the client's language preferences.
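
For example, here is a minimal sketch of turning the top preference into a CultureInfo on the server. The q-value is stripped first, since the raw entries can carry it, and en-US is just an assumed fallback (it assumes using System.Globalization and System.Threading):

// Take the client's most preferred language, if any, and fall back to en-US.
string[] userLanguages = Request.UserLanguages;
string preferred = (userLanguages != null && userLanguages.Length > 0)
    ? userLanguages[0].Split(';')[0]          // e.g. "bg-BG" (drop any ";q=0.7" suffix)
    : "en-US";

// Note: CreateSpecificCulture throws for malformed culture names, so guard this in production code.
CultureInfo culture = CultureInfo.CreateSpecificCulture(preferred);
Thread.CurrentThread.CurrentCulture = culture;
Thread.CurrentThread.CurrentUICulture = culture;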

Wednesday, July 1, 2009

Unveiling System.MDW

Let me explain an (unintuitive) Access/JET security feature: the workgroup file. The workgroup information file, or WIF, stores your user and group information, including usernames and passwords. Each workgroup information file, or .MDW file, contains a unique set of IDs that Access uses for its security encryption. In fact, each user has a 'PID' which, combined with their username and the workgroup PIDs, generates a unique code that Access uses to determine your permissions. So where am I going with this?

Every Access install, for every version of Access, uses a default workgroup file that has the same workgroup PIDs, the same username ("Admin") and the same PID for that user. So if you are trying to secure a database by modifying the default workgroup file, you're already out of luck! Anyone using another computer already has the appropriate set of PIDs, by default, to walk right through your security. So this is a big gotcha.

Access has a default workgroup file named 'System.MDW'. Depending on your version of Access and your OS version, this file can be stored in a multitude of places. For me (Access 97/Win2K) it is stored in C:\WINNT\SYSTEM32\System.MDW. Older versions of Access use one MDW file for an entire computer; newer versions are more multi-user savvy and will install a separate System.MDW file for each user on the system. Newer NT-based operating systems use the C:\WINDOWS folder by default; older ones (Win2K/NT4) use the C:\WINNT folder. In all cases (Access 97 and newer), you can find the file by searching for "System.MDW".

Obviously then, it is not intended that you use the provided-by-default workgroup file. What then shall you do? Create a new, custom MDW file with PIDs you specify. To create a new workgroup file, you can (again, depending on Access version) find and run WRKGADM.EXE or go to Tools->Security->Workgroup Administrator. For me, the file is located at: C:\WINNT\SYSTEM32\WRKGADM.EXE

I'm not going to run through securing your new workgroup file; the Access security FAQ does an excellent job already.

Now that you have a custom workgroup file for use, how do you go about getting Access to use it?

Shortcuts (.LNK files) are the proper way to open a database using Access/JET security. Use the command-line /wrkgrp switch to specify the workgroup file you will use for your secured database. This will always involve creating a custom shortcut. An example of the shortcut's 'Target' line is:

"C:\Program Files\Microsoft Office\Office\MSACCESS.EXE" "C:\atemp\dev\rq_fe.mdb" /wrkgrp "C:\atemp\dev\icg.mdw"

This would open the 'rq_fe.MDB' file using my custom 'icg.mdw' workgroup file.
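
And if you are opening the secured database from code rather than from Access itself, the workgroup file is specified through the connection string instead. A rough sketch using the Jet OLE DB provider (the user name and password are placeholders; the paths reuse the ones from the shortcut example, and it assumes using System.Data.OleDb):

// "Jet OLEDB:System Database" points the Jet engine at the custom workgroup (.mdw) file.
string connectionString =
    @"Provider=Microsoft.Jet.OLEDB.4.0;" +
    @"Data Source=C:\atemp\dev\rq_fe.mdb;" +
    @"Jet OLEDB:System Database=C:\atemp\dev\icg.mdw;" +
    @"User ID=SomeUser;Password=SomePassword;";

using (OleDbConnection connection = new OleDbConnection(connectionString))
{
    connection.Open();   // fails if the user/password is not valid in the workgroup file
    // ... use the connection ...
}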