I just took a MSDart to the head and I don’t feel so well right now.

One of our clients went to run some of our applications on their Windows Server 2003 SP1 box and it wasn’t very pretty. They got the error message:

“The procedure entry point DefWindowProcI could not be located in the dynamic link library msdart.dll”

Not a very helpful message. It happens when we access an ADO object within our code. We couldn’t duplicate the problem on our W2K3 SP1 boxes. Some searches through Google seem to point the finger at a corrupted MDAC stack with SP1. When our client rolled SP1 back, the problem went away. Microsoft seems to be aware of a similar problem, as they have a KB article, 889114, that describes a similar issue in Msdart.dll, but not with W2K3. I have also seen references to another article, 892500, but that one refers to DCOM permissions. I’m not sure if that one is relevant.

The question now is how to resolve this issue. How many ways can SP1 be installed on Windows Server 2003? Our boxes got SP1 through Windows Update and they have the right version of Msdart.dll.

Leave that thread priority alone

During initial e-Link web service development, I played around with lowering the priority of a background processing thread.  It didn’t need to run in real time, and having the service handle client requests was more important.  For a background housekeeping thread, I lowered its priority to BelowNormal.

Well, that didn’t work quite the way I expected.  That thread never ran, or it ran so seldom that it was effectively useless.  So I removed the calls that lowered the thread priority and life was good again.  I chalked it up to one of those thread things not to touch and moved on to other tasks.

Today, I saw a posting on Coding Horror that explained why you should never mess with thread priorities.  If you have a thread running at a lower priority and it enters a sleep state, it may never wake up if another thread with a higher priority continues to run.  That’s an oversimplification; the actual details are described better here by Joe Duffy.
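
For what it’s worth, the offending pattern looked roughly like this (a minimal C# sketch; the names are made up, not the actual e-Link code):

```csharp
using System;
using System.Threading;

class PrioritySketch
{
    static void Main()
    {
        // A background housekeeping thread, roughly what the service had.
        Thread housekeeping = new Thread(() =>
        {
            // ... periodic cleanup work would go here ...
        });
        housekeeping.IsBackground = true;

        // The line that caused the trouble: on a busy box, a BelowNormal
        // thread can be starved almost indefinitely by Normal-priority
        // threads that never block.
        // housekeeping.Priority = ThreadPriority.BelowNormal;

        // The fix: leave the priority at the default (Normal) and let the
        // OS scheduler dole out the CPU time.
        housekeeping.Start();
        housekeeping.Join();
    }
}
```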

The moral of today’s story is “Set up your threads at normal priority and let the operating system deal with scheduling them.”


Stream reading in C#

I was banging my head against the wall with an odd stream reading problem.  I was making a web service call as straight HTTP, no SOAP, when I hit a snag reading the response back.  I was making the request with an HttpWebRequest object and getting the HttpWebResponse back by calling GetResponse().  From the response object, I was using GetResponseStream() to get at the content.  The data coming back was of variable size: you would get a fixed-size header block, plus a number of fixed-size data entries.  The header block had a field that said how many data blocks there would be.

Naively, I thought I could just use a BinaryReader on the data stream and read x number of bytes in for the header block.  Then I would parse that header to get the number of data blocks and call Read() for each of those data blocks.  Let’s say that the header block was 64 bytes in size and the data blocks were 32 bytes.  I had logic like the following:

HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
HttpWebResponse resp = (HttpWebResponse)req.GetResponse();

Stream stream = resp.GetResponseStream();
BinaryReader br = new BinaryReader(stream);

byte[] buff = new byte[Marshal.SizeOf(typeof(MyHeader))];

int c = br.Read(buff, 0, 64);

GCHandle handle = GCHandle.Alloc(buff, GCHandleType.Pinned);
MyHeader header = (MyHeader)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(MyHeader));
handle.Free();

LogEntry MyLogEntry;

for (int i = 0; i < header.EntryCount; i++)
{
  buff = new byte[Marshal.SizeOf(typeof(LogEntry))];
  c = br.Read(buff, 0, 32);
  if (c == 32)
  {
    handle = GCHandle.Alloc(buff, GCHandleType.Pinned);
    MyLogEntry = (LogEntry)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(LogEntry));
    handle.Free();
  }
}

The problem was that c was sometimes less than 32.  My bytes were disappearing.  I did some quick sanity-check code like this:

byte[] buff = new byte[8192];
MemoryStream w = new MemoryStream();  // holding area for the sanity check

int c = br.Read(buff, 0, 8192);
int TotalBytes = c;

while (c > 0)
{
  w.Write(buff, 0, c);
  c = br.Read(buff, 0, 8192);
  TotalBytes += c;
}

When I ran that, TotalBytes had the expected number.  What was I missing?  A little bit of googling found this bit of extremely helpful information from a guy named Jon.  I was reading while data was still coming into the stream.  The Read method can return before all of the data has arrived in the source stream.  I had to read the stream into a holding array, in chunks, until there were no more bytes.  Then I could read the data from the array.  It was so obvious, I can’t believe I missed it.  The ReadFully() method that Jon supplied worked quite well.
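
For reference, the chunked-read approach looks roughly like this (a sketch along the lines of that ReadFully() method, not Jon’s exact code):

```csharp
using System;
using System.IO;

static class StreamUtil
{
    // Reads a stream to the end, in chunks, no matter how the data
    // dribbles in.  Stream.Read() is only guaranteed to return *some*
    // bytes, not all of the bytes you asked for.
    public static byte[] ReadFully(Stream stream)
    {
        byte[] buffer = new byte[8192];
        using (MemoryStream ms = new MemoryStream())
        {
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                ms.Write(buffer, 0, read);
            }
            return ms.ToArray();
        }
    }
}
```

Once you have the complete byte array, you can carve the 64-byte header and the 32-byte entries out of it at fixed offsets, and no bytes go missing.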

In case you were wondering about the GCHandle stuff, that was needed to marshal C-style structures into C# structures.  Getting that bit of code to work right is another story….


Vista performance (or lack thereof)

I just installed the Beta 2 of Vista on one of my dev boxes.  It used to be my primary development box until I got a bright shiny new one last year.  About two weeks ago, the hard drive had a massive failure (it needs to be defraggled at this point) and I needed to rebuild the box.  Since we had the Vista DVD, I figured why not.  We set it up as a dual boot, with XP as the “other” OS.

After a couple of days of use, I’ve come to this conclusion.  Vista is a pig.  It’s slow to boot and slow to run.  It’s running on an older box, a P4 1.7 GHz with 1 GB of RAM, but that box is fast enough to run XP without any issues.  It’s slow enough that I am not going to use it as a day-to-day OS.  I’ll run XP as the primary OS and I’ll manually boot into Vista when needed.  Performance issues come with the beta tag, and that’s all well and good.  I just can’t use it.  It felt like OS/2 on a 386.

The fun part was trying to figure out how to get XP back as the default OS.  The new Vista Boot Loader is a strange and wonderful beast, but it’s not your father’s boot.ini file.  With boot.ini, it’s a trivial process to set the default OS.  Vista requires you to use a new command line tool named bcdedit.exe.  With bcdedit, you can specify the default OS, by using the /default parameter and the GUID of the OS to run.

The GUID of the OS?  Where the frack do I get the GUID of the OS?  If you run BCDEDIT /enum all, you get a listing of everything BCDEDIT knows how to load, and that includes the GUID.  Except for my XP install, which didn’t get one.  Apparently that’s a magic number: if you run bcdedit /default {466f5a88-0af2-4f76-9038-095b170dc21c}, the legacy OS becomes the default.

Since I’m using the Vista Boot Loader, I’ll need to remember to restore XP’s boot loader before I rip out this Vista Beta for the next one.  In the Vista section of Tech-Recipes, they have helpful information on how to do that.  What you need to do is the following:

  1. Reboot using the XP CD-ROM
  2. Start the Recovery Console
  3. Run FixBoot
  4. Run fixmbr to reset the Master Boot Record
  5. Exit the Recovery Console
  6. Reboot

Lifehacker has some tips on how to set up the dual boot here.

Back from vacation

After a couple of weeks of vacation, I came back to the office to find my old development box DOA.  It was worse than a Blue Screen; it was a black screen with a Read/Write error message.  My C: partition was filled with CRC errors.  I could copy some files off it, but most were toast.  A second partition on the same drive reported no errors, so I don’t think it was a drive failure.  We did have a brownout while I was away, and that box only had a surge protector to save it.  Which it didn’t.

I was planning on installing the Vista Beta 2 in a VMware session on my main development box, but I think it will go on the old box.  I have to reinstall the OS anyway, so why not go with the beta.  It’s not like I have anything to lose at this point.

This post and the one before it were created with Windows Live Writer (Beta).  It’s a pretty cool blog entry tool from Microsoft.  What is really nice about it is that it reads your blog setup, so you can edit the posts in the look and feel of your actual blog.  It just works.  It didn’t display the map below, but it showed where it would be.

Another cool feature is that you can edit previously posted entries and it replaces the old post with the new post.  This is my 3rd attempt at this post.

We spent a week down on the Outer Banks in North Carolina.  We have been going down there on and off for the last ten years and I like it a lot.  This year, we stayed in Pine Island, on the southern end of Corolla.  I played a bit with Wikimapia and got this map of where we stayed:

[Wikimapia map of Pine Island, Corolla]

The house in the lower left corner is where we stayed.  I’d tell you the name, but I want to rent it next year…

Dates are not numbers

One of the other developers that I work with had a question about inserting some date values into a SQL Server database.  The code in question does a batch insert, implemented as a series of INSERT statements that get executed in large batches.  He was having some difficulty getting the right values for the dates.  He was formatting the INSERT statement with the datetime values rendered as numeric values.  The end result was that the dates were off by two days.  It was easy to fix, and it’s yet another example of how a leaky abstraction can bite you in the ass.

Dates are not numbers.  You have to treat them as an intrinsic date type, even if the environment handles them internally as floating point numbers.  It’s easy to get into the habit of adding 1.0 to a datetime variable to increment it by one day.  SQL Server will happily let you do so, and so will many programming languages.  Oddly enough, Sybase Adaptive Server Anywhere won’t let you treat datetime values as numbers; it forces you to do it the right way.

The problem is how each environment anchors that numeric value to the actual calendar.  What day is day 0?  It depends on what created that value.  For SQL Server, the number 0 corresponds to 1900-01-01 00:00:00.  In the Delphi programming environment that we do a lot of work in, 0 works out to be 1899-12-30 00:00 (a Saturday, for those keeping score).  If you pass in a datetime as a numeric value, when you query it back out of SQL Server it’s going to be two days ahead of your original date.

The 1899 date was defined by Microsoft when they defined their OLE Automation data types.  The .NET runtime uses 0001-01-01 as its starting point.  Those are not the only ways a datetime can be encoded; Raymond Chen did a decent round-up on his blog.
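
You can see the two-day offset for yourself from C#, since .NET exposes the OLE Automation encoding (the same one Delphi’s TDateTime uses) through DateTime.FromOADate():

```csharp
using System;

class DateEpochs
{
    static void Main()
    {
        // OLE Automation / Delphi TDateTime day zero:
        DateTime oleZero = DateTime.FromOADate(0);   // 1899-12-30

        // SQL Server datetime day zero:
        DateTime sqlZero = new DateTime(1900, 1, 1);

        // The same raw number lands two days apart in the two systems.
        Console.WriteLine((sqlZero - oleZero).Days);  // prints 2
    }
}
```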

The way we usually pass datetime values in non-parameterized INSERT statements is to encode the datetime as a string in the standard SQL-92 format (yyyy-mm-dd hh:mm:ss).
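
From C#, that encoding is a one-liner (the table and column names here are made up for illustration; note the invariant culture, since the “-” and “:” separators in a custom format string are otherwise culture-sensitive):

```csharp
using System;
using System.Globalization;

class SqlDateLiteral
{
    static void Main()
    {
        DateTime dt = new DateTime(2006, 7, 4, 13, 30, 0);

        // Capitalization matters: MM is months, mm is minutes, HH is 24-hour.
        string literal = dt.ToString("yyyy-MM-dd HH:mm:ss", CultureInfo.InvariantCulture);

        Console.WriteLine("INSERT INTO EventLog (EventTime) VALUES ('" + literal + "')");
        // prints: INSERT INTO EventLog (EventTime) VALUES ('2006-07-04 13:30:00')
    }
}
```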

Hidden gotcha in FreeAndNil()

Time to go memory leak hunting in my service. I’m using AutomatedQA‘s AQTime 4, a really cool tool. I’ve used its profiling features in the past, but not the memory leak detection. Since Delphi frees up your allocated memory when you exit the app and/or service, it’s too easy to get sloppy and not free up singleton types of objects. That makes it harder to find actual memory leaks, as AQTime is going to flag everything that wasn’t explicitly freed as a leak. And that will lower the s/n ratio enough to make the tool too hard to use.

So I’m pounding through the code and making sure that everything that gets created gets freed. Great fun; I recommend it for the entire family. I’m starting the service (actually the app version of the service, but that’s another posting), then exiting it after it initializes. That way I can clear out all of the obvious suspects and then turn my attention to the serious memory leaks.

So I’m in the middle of doing this when one of the objects that I am now explicitly freeing blows up when I free it. And not in a good way. This object, let’s call him Fredo (not really the name), owns a few accessory objects (call them Phil and Reuben). In Fredo’s destructor, Fredo destroys Phil and Reuben. In Phil’s destructor, Phil references another object belonging to Fredo and blows up, because Fredo has gone fishing and doesn’t exist anymore.

It took a while to figure out what was going on. You see, Fredo wasn’t actually fishing; Fredo was still around. Phil was accessing Fredo through a global variable (bad legacy code) because Fredo was a singleton. The variable that referenced Fredo had been set to nil, even though Fredo was still in existence.

It took a while, but I figured out where and how I had broken Fredo. The code that I had added to destroy Fredo looked like this:

FreeAndNil(Fredo);

The FreeAndNil() procedure was added back around Delphi 3 or so. You pass in an object reference; it frees that object and sets the reference to nil. Horse-and-buggy thinking for the managed code set, but useful in non-managed versions of Delphi. The problem was that FreeAndNil doesn’t exactly work that way. Let’s take a quick peek at that code:

procedure FreeAndNil(var Obj);
var
  Temp: TObject;
begin
  Temp := TObject(Obj);
  Pointer(Obj) := nil;
  Temp.Free;
end;

It sets the variable to nil before it frees it. That’s not how it’s documented, and it caused my code to fail. There’s nothing wrong with how FreeAndNil is coded; by setting the variable to nil first, other objects can check to see if the object still exists and not try to access it while it’s being destroyed. I just would have preferred that the documentation more accurately described the actual functionality.