What not to do on a job interview: Pressing the Self-Destruct Button

Image by Tumisu

I’ve been with my current employer for the better part of two decades and I was thinking back to the job interviews that I went on before taking this one.  There were two places that I interviewed at where I deliberately blew the interview because I realized we were not compatible.  Before I continue: don’t do what I did.

The first place was a company that was not long past its startup days.  They did web development and had probably fewer than 20 people.  A friend of mine had started working there a few months earlier and he was still bullish on the company.  I applied and, on his recommendation, I was brought in for an interview.

I met first with the owner of the company.  That part went OK, but I didn’t feel comfortable with the owner.  I couldn’t narrow it down to anything specific, but something just didn’t feel right.  It could have been his personality, or it could have been my unease at being back in the job market after less than a year at my current position.  I just wasn’t comfortable with him.

I then met with the director of development.  Let’s call him “Sam”.  My interview with Sam started off well; we seemed to hit it off.  At that point in time, I knew nothing about web development and had been upfront about that.  They were looking for more of a back-end coder, so my SQL skills more than made up for my lack of all things HTML.  We talked about SQL and performance analysis and things of that nature.  The more we talked, the looser Sam became.  He started saying negative things about some of the developers on his team.  Nothing in depth, but totally inappropriate to mention in an interview.  Actually, inappropriate to mention at all.

Sam had been a C programmer and loved to write code that was more complicated than necessary.  On a whiteboard, he had written a single line of code that was an unholy mess of functions and pointer arithmetic and array offsets.  It was his standard programming challenge for job applicants.  He asked me to parse it.  This is more or less what I said:

I would fire the person who wrote this code.  It’s an exercise to show how clever you are for writing this.  By writing all of the code as a Nested Series of Functions from Hell, you eliminated readability and maintainability from the code.  And just forget about error handling; there’s no room for it.  If any one part changes with a parameter or return type, the best that you can hope for is that it fails to compile.  At worst, it would continue to run, you would get the wrong results, and you would then spend hours trying to figure out what had changed.

Well, that was not the answer that Sam was expecting.  He made a big production of going over the code, function by function, pointer by pointer.  He had to make his point, and to be fair, my remarks had been pretty rude.  He tried to get me to agree with him that the code was elegant.  I politely demurred and the interview was pretty much over.  To no great surprise, they did not call me back.

The next interview was with a larger company.  I was interviewing for a Java developer position.  I had taken some Java courses, but had little real-world experience with the language.  I was comfortable enough with Borland’s JBuilder Java IDE to talk somewhat about it.  My current job was transitioning from Delphi to Java, so it was a skill I was starting to pick up.  My current employer was big on what was then called the AS/400.  Other than writing SQL queries over an ODBC connection to an AS/400, I knew nothing about the AS/400.

This interview was the type where you spend 20 minutes at a time with a person or small group and then get passed to the next group.  They had told me to plan on 3 hours for the interview.  I met first with the Java people.  That went well.  They understood that my actual Java experience was limited, but I knew the tools they were using and I knew how to write client/server applications.  I then met with the AS/400 people.  Or rather, the people who would be managing the AS/400 people when they hired the AS/400 people.  They wanted me to be the first person on the team, to port their application from UNIX to the AS/400.

I explained that I was not an AS/400 expert and that my level of AS/400 skill could be measured as none.  They didn’t care; they wanted an AS/400 developer and that was where they would put me if I was hired.  I said that I was looking for a Java developer position and didn’t have the AS/400 skills they were looking for.  They said that would be OK and I could learn the AS/400 as I went along.  They then said that I could move to the Java team after being on the AS/400 team for 6 months.

They were either lying to me or they had no idea what they were talking about.  There was no way that I would have accomplished anything meaningful in 6 months.  Between not knowing what their app did or how it was designed, and not knowing anything meaningful about the AS/400, 6 months was too short a time period.  And from a business perspective, you are not going to spend 6 months getting a developer up to speed on a technology that no one else knows and then allow him to transfer to another team.  That made no sense.

I was then shuttled off to the marketing and sales teams.  They showed me how the app worked and how they sold it.  They did mention how excited they were to be getting an AS/400 version of their application.  They seemed to think that I was going to be the guy, or one of the guys, who gave them the AS/400 app.  Either way, it was going to be a non-starter for me.

Finally, I met with the president of the company.  She swore like a sailor and kept switching topics.  At one point she started talking about a delay of some new feature from one of the teams.  She named each person and described where she thought that person could have dropped the ball.  She then asked me how I would deal with the problem if I had her job.  We spent the next few minutes talking about the situation.  I broke it down by timeline.  Was the timeline to add the feature realistic?  Were enough resources available to implement and test the feature?  Did they have a manager measuring progress against the timeline?  The usual management stuff.  It was just very odd that we were talking about a specific problem with specific people.  I later ended up working with people who used to work there, and they said that development delays were a constant problem.

We then got around to talking about the position.  I said that I had come in for a Java position, but the job was being pitched as a combination AS/400 admin/developer position.  And that was not my skill set.  She said that when they discussed my resume, my current employer’s experience with the AS/400 was more important than any other skill that I had.  I thanked her for her time and finally left.  It was another opportunity where I did not expect or receive a call back.

I have gone on very few job interviews and I handled both of these badly.  With the first position, I should have made an attempt to parse the Code From Hell and kept my opinion to myself.  It was a programming pissing match and my comments did not move the bar forward.  For the second one, I should have halted the interview process once I realized that our job expectations did not match up.  Even if you don’t want the job, you don’t want to blow the interview.  People move around, and you could interview with some of the same people somewhere else and lose the opportunity for your dream job.  Always do your best in the interview.  If you don’t think that the job is right for you, you can always turn down the job offer.

Using jQuery in the console to scrape lists from Apple’s developer portal

Scrape
I needed to grab the lists of registered devices and developers from our company’s Apple Developer portal. Unless I’m being particularly obtuse (an outcome that I never rule out), Apple does not provide any means of exporting the lists.

Apple only allows 100 devices of each type (iPhone, iPad, iWhatever) to be registered as development devices. No matter how many iOS developers you have at your company, 100 is the limit. And if you remove a device from that list, it still counts towards that total.  Once a year, you can reset the list, carrying over the devices that you still need and dropping the ones that are not needed.  To make this easier to manage, I wanted to get a list of the devices and their IDs and have the developers pick the ones that they still need.

So I wanted to export that list.  And Apple doesn’t let you export that list.  You can see it on the screen and work with the items in the list, but no export.  I figured that I wasn’t the only person dealing with that limitation so I did a quick search on Stack Overflow and found this little gem.

var ids = ["Device ID"];
var names = ["Device Name"];
$("td[aria-describedby=grid-table_name]").each(function(){
    names.push($(this).html());
});
$("td[aria-describedby=grid-table_deviceNumber]").each(function(){
    ids.push($(this).html());
});

var output = "";
for (var index = 0; index < ids.length; index++) {
    output += ids[index] + "\t" + names[index] + "\n";
}
console.log(output);

To use that code, you would go to the list of devices in the browser and then open up that browser’s developer tools. For example, in Chrome you would press F12 to open the developer tools. Staying with the Chrome example, you would click on the Console tab in the developer tools, paste in that JavaScript code, and press the Enter key. The code would execute within the domain of the page and generate a two-column list of device IDs and names.

To understand what that code does, you need to look at how the data is rendered on the page. The device list is stored in an HTML table, with each row looking like this:

<tr id="1" tabindex="-1" role="row" class="ui-widget-content jqgrow ui-row-ltr">
    <td role="gridcell" style="text-align:center;display:none;width: 34px;" aria-describedby="grid-table_cb">
        <input role="checkbox" type="checkbox" id="jqg_grid-table_1" class="cbox" name="jqg_grid-table_1">
    </td>
    <td role="gridcell" style="" class="ui-ellipsis bold" title="iPad" aria-describedby="grid-table_name">iPad</td>
    <td role="gridcell" style="display:none;" class="ui-ellipsis" title="c" aria-describedby="grid-table_status">c</td>
    <td role="gridcell" style="" class="ui-ellipsis" title="twletpb659m0ju078namuy8xnv2j0fzt1kytanfz" aria-describedby="grid-table_deviceNumber">twletpb659m0ju078namuy8xnv2j0fzt1kytanfz</td>
</tr>

Looking at the two <td> elements with the grid-table_name and grid-table_deviceNumber attributes, we can see the device name and device ID as the text of the table cell. Each cell has an aria-describedby attribute to identify the type of value being stored. We can search on the values of those attributes to locate the data that we want. Going back to the JavaScript, look at the following lines:

var names = ["Device Name"];
$("td[aria-describedby=grid-table_name]").each(function(){
    names.push($(this).html());
});

The first line declares a JavaScript array with an initial element of “Device Name”. The next line performs a jQuery select for all of the <td/> elements that have an aria-describedby attribute with the value grid-table_name. The next part of the statement iterates over the list of matching <td/> elements and uses the jQuery html() method to get the text value of each cell and add it to the array. We can then use the same technique to get the device ID, build the list as a string, and finally dump it to the browser’s console.
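If a page doesn’t happen to have jQuery loaded, the same scrape can be done with plain DOM calls. Here’s a sketch of that idea; the selectors are the same ones from the markup above, and the tab-separated output step is split out into a helper function of my own (buildTsv) so it can be checked on its own:

```javascript
// Zip parallel arrays of ids and names into tab-separated lines.
function buildTsv(ids, names) {
  return ids.map((id, i) => id + "\t" + names[i]).join("\n");
}

// Browser-only part: collect the cell text with querySelectorAll,
// matching on the same aria-describedby values the jQuery version used.
if (typeof document !== "undefined") {
  const grab = (sel) =>
    Array.from(document.querySelectorAll(sel), (td) => td.textContent.trim());
  const ids = grab('td[aria-describedby="grid-table_deviceNumber"]');
  const names = grab('td[aria-describedby="grid-table_name"]');
  console.log(buildTsv(["Device ID", ...ids], ["Device Name", ...names]));
}
```

Same result as the jQuery version: paste it into the console on the device list page and copy the output.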

I also needed the email addresses of all of our registered developers. The email addresses were not in a table, but part of a list. Each email address is wrapped inside a section element like this:

<section class="col-100 ng-scope">
  <p ng-bind="::person.fullName" class="ng-binding">First Last</p>
  <a class="smaller ng-binding" 
    ng-bind="::person.email" 
    ng-href="mailto:first.last@yourcompany.com" 
    href="mailto:first.last@yourcompany.com">
    first.last@yourcompany.com
  </a>
</section>

I just needed the text part of the <a/> element. Getting the email addresses was a simpler version of the code used to get the devices. I just did a jQuery select on the ng-bind attribute and matched on the value “::person.email”. That ended up being a single line of code to run in the browser’s developer console:

$('a[ng-bind="::person.email"]').each(function(){
  console.log($(this).text())
  });

And that’s how you can screen scrape data from a web page that doesn’t provide any support for exporting the data.

Bonus round
The aria-describedby attribute is a commonly used accessibility attribute that describes the element it is part of. The “aria” part of the attribute name is an acronym for Accessible Rich Internet Applications. Among other things, it was designed to allow assistive reading devices to help parse a page for users with visual difficulties. It’s a good technology to use on your web pages.

Xamarin Dev Days – Latham, NY – Dec 2nd

Looking to start doing mobile app development with Xamarin, but don’t know where to start?  Then we have some good news for you.  Xamarin Dev Days is coming to the Tech Valley.  We’ll be hosting the event on Saturday, December 2nd, at the new Latham office of Tyler Technologies.  While it’s early to announce an event that is 9 months off, it’s still good to get the word out.

Xamarin Dev Days are community-driven, hands-on learning experiences geared towards beginner mobile developers, helping them build, test, and connect native iOS, Android, and Windows apps.  We’ll spend the morning with sessions that introduce the Xamarin ecosystem.  This will include an overview of Xamarin, Xamarin.Forms, and using cloud computing through Azure with Xamarin.

There will be a hands-on lab in the afternoon that will walk everyone through building a Xamarin.Forms app that pulls data down from an Azure-hosted database.

Agenda

Time Session
9:00 AM – 9:30 AM Registration
9:30 AM – 10:10 AM Introduction to Xamarin
10:20 AM – 11:00 AM Cross Platform UI with Xamarin.Forms
11:10 AM – 11:50 AM Connected Apps with Azure
12:00 PM – 1:00 PM Lunch
1:00 PM – 4:00 PM File -> New App Workshop

What is Xamarin?  Xamarin lets you deliver native Android, iOS, Mac, and Windows applications using your existing .NET skills and code.  You can build 100% native apps from a shared code base.  If you can do it in Swift, Objective-C, or Java you can do it in C# with Xamarin.

Tickets to this event are free, but you will need to register in advance.  Visit the Latham Xamarin Dev Days page and then click the register button.

If December is too long to wait, check out the other locations on the Xamarin Dev Days home page.  If you want to host your own Dev Days event, then click here.

A Xamarin port of the usb-serial-for-android library

Back in January, I ported the excellent usb-serial-for-android library from the Java source code to Xamarin C#.  We have an Android application that needs to use an external RFID reader.  The reader is an Elatec TWN4 RFID reader and it can work as a virtual COM port over USB.  To use that reader, I needed a general-purpose serial-over-USB library.  I ended up taking a very good one from the open source Java community and porting it over to C#.  That ported library is up on GitHub under the name UsbSerialForAndroid.

Out of the box, Android doesn’t come with a lot of support for serial port devices.  It’s probably not a common use case.  Starting in Android 3.1, support was added for USB host mode to allow access to USB devices from Android apps.  There was enough of a need for serial devices that Mike Wakerly wrote a very good library in Java named usb-serial-for-android.  It supports many of the common USB serial chipsets.  So I wanted to use that.

With Xamarin.Android, you basically have two ways of consuming Java libraries.  You can use them directly by creating a C#-to-Java wrapper and bundling the .jar file with your project.  While that can work, and work very well, it can also be a bit clunky and you can hit some issues mapping the Java method calls to C#.  Another group had gone down that path.  They implemented a wrapper for the .jar file and added some helper classes.  It looked like their project was abandonware and was not using a current version of Mike’s code.  You would also have the limitation of not being able to debug into that code library.

If you have the source code, you can port the code from Java to C#.  I decided to go down that route.  It took a couple of days, but I was able to port all of the Java code to C#.  It went over more or less as is.  Some changes needed to be made because reflection is handled differently in C# than in Java.  There was also a bug in Xamarin’s API access code that mangled the array handling in some Java code.

In Java, ByteBuffer.wrap(someBuffer) allows for two-way updating of a Java array with a stream buffer.  A bug in Xamarin’s API mapping tool emits code that allocates a new buffer when you call Wrap.  Changes made to the ByteBuffer are not reflected in the original byte array.  This is logged in Xamarin’s Bugzilla database here and here.

In the CdcAcmSerialPort.Read() method, defined here in C# and here in Java, I needed to add a line to copy the new array back over the original array.

In the original Java (edited) code, we had this:

final ByteBuffer buf = ByteBuffer.wrap(dest);
if (!request.queue(buf, dest.length)) {
    throw new IOException("Error queueing request.");
}

final int nread = buf.position();
if (nread > 0) {
    return nread;
}

In the C# code, I added a call to BlockCopy to overwrite the original byte array with the updated contents:

ByteBuffer buf = ByteBuffer.Wrap(dest);
if (!request.Queue(buf, dest.Length))
{
    throw new IOException("Error queueing request.");
}

int nread = buf.Position();
if (nread > 0)
{
    System.Buffer.BlockCopy(buf.ToByteArray(), 0, dest, 0, dest.Length);
}
return nread;

I also replaced some integer constants with enumerated types where it made sense to do so, and I took the C# helpers from the LusoVU repository.

As much as I could, I followed the code structure of the Java library.  When changes are made to that library, I can view those changes and make the equivalent changes in the C# code.  The end result was that I ended up with all C# code, and it works great.

The TWN4 has become my favorite RFID reader.  It’s very good at reading different card types and you can write custom firmware for it in C.  I used it in another project where it had to use a custom protocol to communicate with the host device.

TWN4 reader

And then my blog was defaced

A couple of weeks ago, my blog was defaced through a security hole in WordPress. About 800,000 blogs were hit via something called the REST-API exploit. I saw something like this on the main page of my blog:

Hacked message

I blurred out the identifying text and graphics.  No sense giving any credit to the ones behind the hack.  I actually support their cause, but not this kind of stuff.

At that point I had no idea what had happened.  I figured that either someone had hacked the OS or someone had hacked WordPress.  I went in and deleted the post, and then my blog stopped working.  I was too busy at the time to deal with it, so I just shut the blog down.  I was running a virtual machine up in the cloud and I had installed Linux, MySQL, and WordPress manually.  I recommend doing that at least once.  But no more than just once.  I had to manually edit a bunch of files so that my WordPress site was the default site for the machine.

I then found out that the problem was caused by a security hole in WordPress 4.7.0/4.7.1 that had since been quietly patched in 4.7.2.  My blog was not set up to automatically update WordPress, so it was one of the 800k that had been hit.

Paris Tuileries Garden Facepalm statue

I should have had automatic updates turned on

I had backups of the blog, so I knew I could get it back up and running.  I decided to take some time and start over again.  While it would have been easy to just delete the posts, there were reports that Remote Command Execution (RCE) attacks were being attempted through this exploit.  I don’t think that I had any plugins that would allow an RCE attack, but I decided to err on the side of caution.

I looked at some of the sites that offer WordPress hosting, but I decided to do it in a VM again.  The price is roughly the same as some of the cheaper hosting plans, but I would have full control over the site.  I would also have full responsibility for keeping it up and running; there’s never a free lunch.

Instead of installing everything myself, I used Bitnami’s one-click WordPress installer.  In the Azure marketplace, Bitnami has an installer that will install the server edition of Ubuntu 14.04 LTS “Trusty Tahr” with all of the bits needed to run WordPress.  The “LTS” designation is important; it stands for Long Term Support, and this version will be supported until April 2019.  It includes the phpMyAdmin tool for managing MySQL databases.  I created a new database and restored the table with the posts from my old blog.  I backed up the new blog database (just to be safe).  I tried installing all of the rows from that table into the new blog, but that broke the blog.  Something in the hacked posts was probably doing something bad.  I restored the new blog from the backup and then exported the old blog posts up to the date that it was hacked.  I restored those records and the blog was happy.

So the blog was more or less ready to go at this point.  I installed VaultPress and it immediately blocked people trying to do things to it.  It wasn’t really public yet.  It had a DNS name visible to the outside world, but not my DNS name.  I went to my DNS registrar (GoDaddy) and updated the DNS records to redirect rajapet.com from the old VM to the new one.  With the DNS updated, I was able to do something that I had been meaning to do for a while: add SSL/TLS support and enable HTTPS for the blog.

I’m not doing anything that really needs HTTPS, but the browsers are really pushing for sites to use HTTPS.  In the old days, that meant buying an SSL certificate, installing it, configuring your site to use it, etc.  The people behind Let’s Encrypt have changed that story.  It’s a free and open Certificate Authority that provides free certificates to allow anyone to enable a trusted HTTPS site.  All you need is to own your own domain (and have some level of access to the web server).  They provide the cert and the tools to install and update the certificate.

Let’s Encrypt is a free, automated, and open Certificate Authority.

It was just slightly tricky to get the Let’s Encrypt tools to work on my site.  Bitnami’s installations of Apache and WordPress are slightly different from standard installs.  Not wrong, just different enough that the automated Let’s Encrypt tool didn’t complete its task.  The documentation on the Bitnami site is very good and walks you through the Let’s Encrypt manual steps.  I set the certificate to use rajapet.com rather than www.rajapet.com.  The “www.” is archaic and I don’t need it for this site.  With good stuff like Let’s Encrypt, there is really no excuse not to use HTTPS any more.

I edited the httpd-app.conf file that Bitnami uses in place of the .htaccess file to redirect HTTP and www.rajapet.com requests to the simpler https://rajapet.com. If you are running Bitnami’s WordPress install, it’s pretty easy to change and is more or less documented here.  In /opt/bitnami/apps/wordpress/httpd-app.conf, you’ll want to add the following lines after the line with “RewriteEngine On”:

    #SSL redirection
    RewriteCond %{HTTPS} !on
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    # Replace www.xxxx with xxxx
    RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
    RewriteRule ^(.*)$ https://%1/$1 [R=301,L]

The first block takes any request that uses HTTP and replaces it with HTTPS.  The second block strips the “www.” from the start of the URL.  You can still use HTTP or “www.”, but you’ll be taken to https://rajapet.com each time, with a 301 redirect to let search engines know that this is a permanent change of the link.  After making that change, remember to restart Apache.

So the blog is back.  I only restored the posts; past comments may or may not come back.  I installed the usual security plugins, but I still need to install the code formatting plugins.  I picked a new theme that’s pretty basic and mobile friendly.  That will probably change; it’s kind of on the “blah” side, at least as I have it set up.  I used to have an about page that had a form for entering comments.  That was a SPAM magnet and I had disabled it just before the hack attack.  If you want to get in touch with me, the best bet is through one of the social media links in the sidebar.

My book about localization with Xamarin is out

Cross-platform Localization for Native Mobile Apps with Xamarin

Last month Apress published my book on writing localized apps with Xamarin. It’s titled “Cross-platform Localization for Native Mobile Apps with Xamarin” and is available on Apress’s site or on Amazon.

It’s not a long book, about 110 pages.  It provides the basic information you would need to localize your app for other languages and countries.  It’s written for the Xamarin developer, but the topics apply to other developers.

After discussing the basics of localization, the book covers how to get your text translated.  There is a chapter that covers the Multilingual App Toolkit (aka MAT), a Visual Studio extension that is a workflow management tool for language resource files.

Multilingual App Toolkit

I love this tool

If you are using Xamarin.Android and Xamarin.iOS to build your apps, you can use MAT to generate Android and iOS string resource files.

Besides translating the text that is compiled into the app, I spend time discussing how to manage language resources coming from web services.  Once again, MAT is handy for managing language resources server side.

The sample app (Spanish version)

There is a chapter that builds a Xamarin.Forms app from scratch and then localizes it for the Chinese, Spanish, and German languages.  For the German translations, I was helped out by one of my co-workers, David Krings.  David is a native German speaker with technical writing experience.

When translating language resources, a fellow co-worker is one of the best resources.  Besides knowing the language, that person will know the domain of the app.  Context is very important with language translation.

Consider a string resource of “boot” in English that you need to translate to Polish.  You need to know which meaning of “boot” to translate.  In US English, it can mean a type of footwear, which could be translated as “kalosze”.  It could also mean to start up a process, which could translate to “rozruchu”.  In UK English, it’s the part of the car where you store things, which translates more or less as “bagażnika”.  Without the context, it’s easy to get the wrong translation.
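One common way to carry that context is to key string resources by how they are used rather than by the English word itself. Here’s a minimal sketch of the idea (the keys and the lookup function are hypothetical, not from the book), reusing the Polish translations above:

```javascript
// Hypothetical string tables keyed by usage context, so the single
// English word "boot" can map to three different Polish translations.
const resources = {
  en: {
    "label.footwear": "boot",
    "action.startProcess": "boot",
    "label.carStorage": "boot",
  },
  pl: {
    "label.footwear": "kalosze",
    "action.startProcess": "rozruchu",
    "label.carStorage": "bagażnika",
  },
};

// Look up a string by context key, falling back to English when the
// language or the key is missing.
function translate(lang, key) {
  const table = resources[lang] || resources.en;
  return table[key] || resources.en[key];
}
```

With context baked into the key, the translator sees “action.startProcess” and knows which “boot” is meant.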

For the Chinese and Spanish translations, I used a commercial translation company called SmartCAT.  Two of their translators, Max Diaz and Максим Морковкин, did the Spanish and Chinese translations.  I like how SmartCAT works and recommend it to anyone that needs translation services performed.  SmartCAT’s Head of Community, Vova Zakharov, arranged for the translation services, and at the last minute, too.

When you have your app translated, you will need to have it done by someone with the right language expertise.  Machine translation has come a long way, but it’s better to do it right.

Besides language translation, the book covers how to deal with numbers, dates, currency, and other country/regional formatting.  The nice thing about using the .NET Framework is that most of the heavy lifting is done for you.  You don’t have to worry about how to handle it with one OS and then figure it out for another OS.
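The book’s examples use the .NET Framework’s culture support for this; as a quick illustration of the same idea in another runtime, here is how JavaScript’s built-in Intl API handles locale-specific number, currency, and date formatting (the locales and values here are just examples):

```javascript
// The runtime supplies the grouping separators, decimal marks, currency
// symbols, and date ordering for each locale; you only name the locale.
const n = 1234567.89;

console.log(new Intl.NumberFormat("en-US").format(n)); // comma grouping, dot decimal
console.log(new Intl.NumberFormat("de-DE").format(n)); // dot grouping, comma decimal
console.log(
  new Intl.NumberFormat("de-DE", { style: "currency", currency: "EUR" }).format(n)
);
console.log(
  new Intl.DateTimeFormat("es-ES", { dateStyle: "long" }).format(
    new Date(2016, 0, 15)
  )
);
```

The point is the same in either stack: you never hand-build the format strings for each country yourself.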

The source code for the book is up on GitHub.  You can access it at Apress/cross-platform-localization-xamarin.  You will want to download or clone the repo to follow along with the book.

And while I’m thanking people, I wanted to give a shout out to the technical reviewers.  Craig Dunn is part of the Xamarin team that is now part of the Microsoft team. He made this book much, much better.  Cameron Lerum created the Multilingual App Toolkit, which made most of this book possible.  Between the subject expertise and technical writing skills, I could not have had any better reviewers for this book.

Quick Powershell tip for avoiding file name collisions when you have images from multiple cameras

My wife and I both have Sony mirrorless cameras. She has an Alpha 5000 and I have a NEX-6. I back up all of our images to my home dev box, and then to a QNAP NAS server. The images are stored in folders based on year and month.

The date-based filing makes it somewhat easy to find the images, but when you have images from multiple cameras, you run the slight (but real) risk of having both cameras save an image with the same file name. Both cameras use the standard DSCXXXXX naming scheme, where XXXXX is a number incremented by the camera.  It’s also a little difficult to quickly separate her images from mine.

So what I do is use a quick PowerShell command to rename my wife’s images. I replace the “DSC” part of the file name with her initials. So let’s say that you wanted to replace the “DSC” with “ABC”; a simple way would be with the following command:

Get-ChildItem -Filter "DSC*.*" |
  Rename-Item -NewName {$_.Name -replace 'DSC','ABC' }

That will get all of the files in the current folder that match DSC*.* and pass that list to the Rename-Item command. Rename-Item will iterate through that list and replace all of the occurrences of “DSC” with “ABC”. If you only shoot JPEG files, you can filter on “DSC*.JPG”. We shoot RAW plus JPEG, so we used the wildcard extension to get all of the files.

Now I can store her images with mine and they all get backed up to multiple places.  I also back them up to multiple cloud providers, because any single backup can fall down.

If you want to explicitly handle just a couple of file extensions, you can use the -Include option instead of -Filter. When you use the -Include option, you must also use the -Path option and set the path. It would look like this:

Get-ChildItem -Path .\* -Include *.jpg, *.raw |
  Rename-Item -NewName {$_.Name -replace 'DSC','ABC' }

Modifying test data for privacy

Sometimes I get actual live data from a client to track down a bug that only happens with their data.  That data will contain student records, and we don’t like to have live student data lying around.  We can use TDE to encrypt the data at rest, but if I’m sharing that data with other developers, I want to scrub identifying details from the data set.

For the most part, I just need to replace the first and last names in the student table.  I could set both the first and last names to “Gank”, but if every record looks the same, it can be hard to see how the bug manifests itself.  I could set both attributes to the record ID value for the record, but I find that hard to look at after a while.

What I end up writing is some sort of reubenizer code.  The reubenizer changes the first and last names to some variation of “Reuben”.

The Patron Saint of all that is Reuben.  Dave Madden as Reuben Kincaid

Let’s create a fake table to represent the student data to modify.

declare @Student TABLE
(
    RecordID varchar(4),
    FirstName varchar(80),
    LastName varchar(80),
    Gender char(1)
);

-- Push in some fake data
insert into @Student (RecordID, FirstName, LastName, Gender) values (1, 'Joe', 'Smith', 'M');
insert into @Student (RecordID, FirstName, LastName, Gender) values (2, 'Joel', 'Smith', 'M');
insert into @Student (RecordID, FirstName, LastName, Gender) values (3, 'Jane', 'Smith', 'F');
insert into @Student (RecordID, FirstName, LastName, Gender) values (4, 'Linda', 'Tokken', 'F');
insert into @Student (RecordID, FirstName, LastName, Gender) values (5, 'Samantha', 'Queen', 'F');
insert into @Student (RecordID, FirstName, LastName, Gender) values (6, 'Steve', 'Burton', 'M');
insert into @Student (RecordID, FirstName, LastName, Gender) values (7, 'Doug', 'Francis', 'M');
insert into @Student (RecordID, FirstName, LastName, Gender) values (8, 'Linda', 'McLinda', 'F');
insert into @Student (RecordID, FirstName, LastName, Gender) values (9, 'Paul', 'Davis', 'M');
insert into @Student (RecordID, FirstName, LastName, Gender) values (10, 'Ann', 'Davis', 'F');

After running those inserts, selecting from @Student will return a result set that looks like this:

RecordID FirstName        LastName         Gender
-------- ---------------- ---------------- ------
1        Joe              Smith            M
2        Joel             Smith            M
3        Jane             Smith            F
4        Linda            Tokken           F
5        Samantha         Queen            F
6        Steve            Burton           M
7        Doug             Francis          M
8        Linda            McLinda          F
9        Paul             Davis            M
10       Ann              Davis            F

The first thing I do is create a table with a set of surname prefixes.  These prefixes will be combined with the string “Reuben” to create the new last names.

-- Create a table with some surname prefixes.
-- We'll pick the prefix from the last digit of the record id of the student.
-- We only do this so we don't have to look at the same name for every row.

declare @Reuben TABLE
(
    RecordID varchar(4),
    LastName varchar(16)
);

-- Pick 10 different prefixes
insert into @Reuben (RecordID, LastName) values ('1', 'Mc');
insert into @Reuben (RecordID, LastName) values ('2', 'de ');
insert into @Reuben (RecordID, LastName) values ('3', 'Del');
insert into @Reuben (RecordID, LastName) values ('4', 'St ');
insert into @Reuben (RecordID, LastName) values ('5', 'Van ');
insert into @Reuben (RecordID, LastName) values ('6', 'Le ');
insert into @Reuben (RecordID, LastName) values ('7', 'La');
insert into @Reuben (RecordID, LastName) values ('8', 'Lo');
insert into @Reuben (RecordID, LastName) values ('9', 'O''');
insert into @Reuben (RecordID, LastName) values ('0', '');

Now it’s time to create the update statement to reubenize the names. To get the surname prefix, we’ll take the last digit of the record id. That slices the students into 10 different sets of last names. There are other ways of doing this; this one is quick and simple. You could do the same thing with the first name, but in this case, I’m just going to use “Reuben” for the boys and “Reubenette” for the girls, and tack on the record id.
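The renaming rule is easier to see in isolation than buried inside an UPDATE statement. Here is a hypothetical sketch of it in Python; the prefix table mirrors the @Reuben rows above:

```python
# Last digit of the record id picks the surname prefix (mirrors @Reuben).
PREFIXES = {"1": "Mc", "2": "de ", "3": "Del", "4": "St ", "5": "Van ",
            "6": "Le ", "7": "La", "8": "Lo", "9": "O'", "0": ""}

def reubenize(record_id, gender):
    """Return the scrubbed (firstname, lastname) for one student row."""
    last_digit = str(record_id)[-1]
    lastname = PREFIXES[last_digit] + "Reuben"
    base = "Reuben" if gender == "M" else "Reubenette"
    return f"{base}-{record_id}", lastname
```

Record 9 comes back as (“Reuben-9”, “O'Reuben”), record 10 as (“Reubenette-10”, “Reuben”), matching the SQL results further down.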

To make the code a little cleaner, I use a Common Table Expression (or CTE) to create a calculated field for the last digit of the record id. If you are not familiar with CTEs, they let you build temporary result sets that only exist within the context of the SQL statement they are part of.  I blogged about using a CTE here and there.
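If you have not used a CTE before, here is a minimal sketch of the idea using SQLite from Python. It is a different dialect than the T-SQL in this post, but the WITH clause behaves the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (recordid INTEGER)")
conn.executemany("INSERT INTO student VALUES (?)", [(1,), (12,), (103,)])

# The CTE is a temporary result set (record id plus its last digit)
# that only exists for the duration of this one query.
rows = conn.execute("""
    WITH cte AS (
        SELECT recordid, recordid % 10 AS last_digit FROM student
    )
    SELECT recordid, last_digit FROM cte ORDER BY recordid
""").fetchall()
print(rows)  # [(1, 1), (12, 2), (103, 3)]
```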

That update statement would look something like this:

-- Using a CTE allows us to calculate the last digit just once
WITH cte (recordid, offset) 
     AS (SELECT recordid, 
                RIGHT(Cast(s.recordid AS VARCHAR), 1) AS OffSet 
         FROM   @student s) 
UPDATE s 
SET    s.lastname = Concat(r.lastname, 'Reuben'), 
       s.firstname = CASE s.gender 
                       WHEN 'M' THEN Concat('Reuben-', cte.recordid) 
                       ELSE Concat('Reubenette-', cte.recordid) 
                     END 
FROM   cte 
       JOIN @student s 
         ON cte.recordid = s.recordid 
       JOIN @Reuben r 
         ON r.recordid = cte.offset; 

After running that update statement, selecting from @Student again will return this:

RecordID FirstName        LastName         Gender
-------- ---------------- ---------------- ------
1        Reuben-1         McReuben         M
2        Reuben-2         de Reuben        M
3        Reubenette-3     DelReuben        F
4        Reubenette-4     St Reuben        F
5        Reubenette-5     Van Reuben       F
6        Reuben-6         Le Reuben        M
7        Reuben-7         LaReuben         M
8        Reubenette-8     LoReuben         F
9        Reuben-9         O'Reuben         M
10       Reubenette-10    Reuben           F

The records are no longer recognizable, yet are distinct enough to let me debug the problem. This doesn’t work for every kind of data element, but it lets me work with and share live data without exposing any personally identifiable information.

Found the cause for ADB error message “Could not open interface: e00002c5”

Frustration by Peter Alfred Hess

I spent way too much time tracking down a problem that prevented ADB on my Macbook from seeing my phone.  While at the Xamarin Evolve conference last week, I hooked up my Nexus 6P to my Macbook Pro to try some Xamarin.Forms code.  I connected the phone and checked the box when the phone prompted to allow debugging.  The Xamarin Studio IDE did not see the phone.

So I opened up a terminal window and started issuing ADB commands.  If you don’t do Android development, ADB stands for Android Debug Bridge; it provides the communication channel that allows a development tool to talk to an Android device or emulator.

I ran the command “adb devices” and it came back with “no devices”.  To get more information, I ran the command to stop the ADB service and restart it:

adb kill-server ; adb devices

That generated the following output

List of devices attached
* daemon not running. starting it now on port 5037 *
adb I 2192 55546 usb_osx.cpp:259] Found vid=18d1 pid=4ee2 serial=8XV7NXXXXXXXXXXX
adb I 2192 55546 usb_osx.cpp:259]
adb E 2192 55546 usb_osx.cpp:331] Could not open interface: e00002c5
adb E 2192 55546 usb_osx.cpp:265] Could not find device interface
* daemon started successfully *

The 8XV7NXXXXXXXXXXX is an obfuscated version of my phone’s serial number, so ADB could see that I had a device connected.  The Android File Transfer app could also see the phone, so the connection was there.  Something was just interfering with ADB.

Since I was at a Xamarin conference, I grabbed a Xamarin Android engineer and we started digging in.  The first step was to use his phone and cable, to rule out my phone and/or cable as the problem.  We saw the same problem with his phone.   So we tried the obvious steps:

  • Rebooted the Macbook.  Nope
  • Used the other USB port. Nope
  • Downloaded a new copy of the Android SDK. Nope
  • Ran an Android emulator.  That worked, indicating the problem was between the USB port and ADB
  • Grabbed a more senior Android engineer who told us to go look at the ADB source code.

The engineers had classes to assist with, so it was down to just me.  I went and looked at the source code for the usb_osx.cpp unit from ADB on GitHub.  The line numbers didn’t match up exactly, but the error was that ADB literally could not open the USB port.  That meant another process had its greedy little mitts on my USB port.

I rebooted the Macbook in safe mode.  That runs OS X without the third-party extensions.  Sure enough, ADB was able to connect just fine.  That was the first clue; then some people in real life and on the Internets suggested that it might be a tethering app.

Apparently at some point last year, I installed EasyTether.  I don’t use it, but I had neglected to uninstall it.  And it’s documented on the EasyTether FAQ page that it will break ADB’s connection with devices.  I pointed Finder at /System/Library/Extensions and sure enough, I had EasyTetherUSBEthernet.kext installed.  I could have used kextunload to just unload the EasyTether extension, but I decided to just yank it out.  I dragged it over to the Trash and rebooted.

I plugged my phone in and ADB saw it.

This made me happy

I could use my phone for debugging again.  I use Android emulators about half of the time, but when I want to see how the app behaves on an actual device, there’s no beating the real thing.  Plus, debugging touch on a Mac just plain sucks.

I’ve been using Vysor to mirror the Android screen to the desktop, and it works great.  I can use the actual device’s screen or control it from the Macbook.    If you are doing a presentation and want to show what is running on your Android device, get Vysor.  It’s a Chrome app and uses ADB, so it works on Mac, Windows, and probably Linux.

Adding Google Play Services to Visual Studio Android Emulator

Out of the box, the Visual Studio Android Emulator (and the Genymotion emulator, and the Xamarin Android Player) does not support Google Cloud Messaging (GCM) push notifications.  The reason for this is that GCM is part of the Google Play Services.  And the Google Play Services are not included in the virtual machine (VM) images that the Visual Studio Android Emulator uses.

The typical Android device starts with a base Android stack that comes from the Android Open Source Project (AOSP).  Device OEMs (Samsung, Huawei, LG, etc.) then license the Google Play Services from Google.  On top of that, the OEMs add any customizations that they make to Android.

Google does not allow Microsoft/Genymotion/Xamarin to include the Google Play Services with their builds from the AOSP.  Enough developers have put together versions of the package so that it’s a fairly easy process to install. They are commonly packaged under the name “GApps”.

Run the Visual Studio Emulator for Android from the Start Menu.  If you run it from VS, you may not be able to install firmware packages.  Then create a new VM.  For this example, we’ll create an Android 5.1 VM.  I tried this with Android 6 and it did not work with the GApps packages that I was able to obtain.

Emulation Manager

If you are using an existing VM, you’ll need to know which CPU architecture (ABI) the VM is running under.  Thanks to a tip from the nutty people at Intel, you can execute an ADB command to see what is on board.

adb shell getprop ro.product.cpu.abilist

Also see the documentation for the Build class.

Since we created the VM, we know it’s Android 5.1.  If you were working with a VM and were not sure of the version, you can check via the Android Settings app or from the command line with adb.

adb shell getprop ro.build.version.release

While we are checking stuff with ADB, the following command will return the SDK version

adb shell getprop ro.build.version.sdk

Results from the ADB commands
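If you script your emulator setup, those getprop calls can be wrapped from Python. This is a sketch, not an official tool, and it assumes adb is on your PATH:

```python
import subprocess

def getprop(prop):
    """Run `adb shell getprop <prop>` and return the trimmed value."""
    out = subprocess.run(["adb", "shell", "getprop", prop],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def needs_arm_translation(abilist):
    """An x86-only image needs the ARM translator installed before
    ARM-compiled packages (like most GApps builds) will run."""
    abis = [a.strip() for a in abilist.split(",") if a.strip()]
    return "x86" in abis and not any(a.startswith("arm") for a in abis)
```

Feeding getprop("ro.product.cpu.abilist") into needs_arm_translation tells you whether the ARM translator described next is required.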

First up is the installation of an ARM translator. The VS Android Emulator gets its speed by running an x86 build of Android, but the Google Play Services are usually packaged up already compiled for ARM. The ARM translator lets ARM code run on an x86 image. It is usually packaged in a .zip named ARM Translation v1.1.

Installing is easy: drag the .zip onto a running Android VM and follow the prompts.

The dialog that appears after dragging and dropping the ARM Translator package onto the Android VM

If it didn’t reboot the VM, reboot it to be safe.  Multiple web sites have a copy of this file.  I downloaded one from the Tech Bae blog.

ARM Translator installed

Since we have Android 5.1, we need a GApps package for Android 5.1.  There are a few places where you can download a package, but not all of them may work.  I was hoping to use the packages from the Open GApps project, but none of their packages would install into my VMs.  They all reported an invalid folder error message.

The file sets available from TeamAndroid should install without any problems.  I downloaded one named gapps-lp-20150314.zip.  The “lp” in the file name stands for Lollipop, the code name for Android 5.

Drag the gapps package and drop it on your Android VM.  You should get a dialog like this:

Click the install button and let it do its thing.  After it completes, the VM should shut down.  Restart it from the Emulator Manager.  After Android starts up, you may see an “Optimizing app X of Y” dialog.  When Android versions upgrade, the apps all need to be tuned for the new version.  This is normal.

When that is all done, you should see the Google Play icon in the app drawer.  Launch the app and provide your Google account information.  If you see an endless busy indicator, let it go for a minute, then close and restart Google Play.

You may see an error message about Google Play services having stopped.  That is normal and should go away once the Google Play services have been updated.

After installing GApps, some (many) Google apps and services will probably crash. Do not be alarmed, that is perfectly normal. Most of the files are out of date.

Get the Google Play app to run long enough for you to login and it will start updating.  To force Google Play to update itself, do the following (from Android Central):

  1. Launch Google Play
  2. Slide out the menu
  3. Tap on Settings
  4. Scroll to the bottom and tap Build version

If a newer version is available, you’ll see a dialog with that information.

At that point, your Android VM will support push notifications.  You can install Google Play apps like the Maps application.  These steps were tested with the Visual Studio Android Emulator, but they should work more or less the same way with the Genymotion and Xamarin emulators.

The Open GApps page looks like a promising location to get GApps packages; they have a list of variants, where each variant includes more or fewer of the Google apps.  To keep things simple, I wanted to use the stock version.  There is a naming scheme for GApps distributions.  It follows the pattern DistName-abi-version-variant-date.zip, or a subset of that pattern.

For this example, I had downloaded the stock version of the x86 Android 6.0 GApps.  It came down with the following file name:

open_gapps-x86-6.0-stock-20160316.zip

It wouldn’t install, but that is the accepted pattern for naming GApps packages.  Both it and the ARM version errored out with an invalid directory message.  Hopefully this will be addressed in an update to the Visual Studio Android Emulator.
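For what it’s worth, that naming pattern is regular enough to pull apart with a script. The field names below are my own labels, not an official spec:

```python
import re

# DistName-abi-version-variant-date.zip (or a subset of that pattern)
GAPPS_RE = re.compile(
    r"^(?P<dist>\w+)-(?P<abi>x86|x86_64|arm|arm64)"
    r"-(?P<version>[\d.]+)-(?P<variant>\w+)-(?P<date>\d{8})\.zip$"
)

def parse_gapps_name(filename):
    """Split a GApps package file name into its fields,
    or return None if it doesn't follow the full pattern."""
    m = GAPPS_RE.match(filename)
    return m.groupdict() if m else None
```

The Open GApps file above parses cleanly; the shorter TeamAndroid name (gapps-lp-20150314.zip) uses the subset form and does not.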

This article’s banner image comes from Arena4G.