Using the Multilingual App Toolkit with the Microsoft Translator service from Azure’s Cognitive Services

Image courtesy of M. Adiputra via Wikimedia Commons

The Multilingual App Toolkit (MAT) is a great tool for managing your localization workflow with Visual Studio projects.  One of its features is the ability to machine translate your string resources into another language.  You would not want to ship an application that was machine translated, but it provides a jumping-off point for the actual translators who will work on your text.  When it was originally released, MAT included support for the Microsoft Translator service; it just worked right out of the box.

Back in April (2017), the Microsoft Translator service that was part of Microsoft DataMarket was retired.  The replacement is part of Cognitive Services and is hosted in Azure.  MAT was using the Microsoft Translator service as the default translation provider for doing machine translations.  If you are using MAT to manage your language resource files and wish to use Microsoft’s translation services, you’ll need to make some changes.  They are documented here, but I’ll mirror the text.

Make sure that you have the latest version of MAT installed.  For VS 2017, you’ll need 4.0.6810.0 as the minimum version.  For VS 2015 or 2013, you’ll need 4.0.180.0 as the minimum version.

Then, you’ll need an Azure subscription.  When you use the Translator Text API, you can pick from a number of pricing plans.  The free plan lets you translate 2 million characters a month, but does not allow overages.  The next plan charges $10/month per million characters, and it moves up from there.  For most projects, the free tier should be sufficient.

You can sign up for an Azure account at http://azure.com.  You’ll need a credit card, but you won’t be charged unless you go beyond the free plans.

Once you have an Azure account, sign in to the portal and click the New button.

Type “Translation” into the “Search the marketplace” entry field.  Then select the Translator Text API.

Click the Create button.

Select the free pricing tier, fill out the rest of the fields, and then press the Create button.


After Azure deploys the service, it will take you to the service page.

Under “Resource Management”, click “Keys”.  Copy the value for “Key 1”.  This will be the authorization key that enables MAT to use the translation services.  Azure defines two keys, Key 1 and Key 2; they will both work.  If you plan on sharing the service to let other people translate your resources, you can revoke the key at a later date.

Open the Credential Manager in Windows.  The easiest way to launch it is to press the Windows key or Start Menu button and then start typing “Credential”.  You should see the app in the list of matching items.

Select “Windows Credentials”

Click “Add a generic credential” and then fill in the fields.

For Internet or network address, enter Multilingual/MicrosoftTranslator

For User name, enter Multilingual App Toolkit

For Password, enter the authorization key that you copied from Azure.

If you copied and pasted the text, make sure that the text does not have trailing spaces at the end.

Click the OK button.

Entering the Azure key as a generic credential

Once you have added the credentials, the Multilingual App Toolkit editor will use the translation services from Azure.
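If you prefer the command line, you should be able to create the same generic credential with the cmdkey tool that is built into Windows.  This is a sketch of the equivalent command; replace <your-key-1-value> with the Key 1 value that you copied from the Azure portal:

cmdkey /generic:Multilingual/MicrosoftTranslator /user:"Multilingual App Toolkit" /pass:<your-key-1-value>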


You can now register for the Dream.Build.Play 2017 Challenge

Microsoft has revived its indie game dev contest, Dream.Build.Play, and you can register now.

Dream.Build.Play 2017 Challenge

The 2017 Challenge is a six-month game contest aimed at indie devs. Teams of up to 7 (individuals can go solo if they desire) can enter one or more of the four categories available:

  • Cloud-powered game – Grand Prize: $100,000 USD
    Azure Cloud Services hands you a huge amount of backend power and flexibility, and it’s cool. So, here’s your shot at trying Azure out and maybe even winning big. Build a game that uses Azure Cloud Services on the backend, like Service Fabric, CosmosDB, containers, VMs, storage, and Analytics. Judges will give higher scores to games that use multiple services in creative ways, and will award bonus points for Mixer integration.
  • PC Game – Grand Prize: $50,000
    Building on Windows 10, for Windows 10? This is the category for you. Create your best UWP game that lives and breathes on Windows 10 and is available to the more than 450 million users through the Windows Store. It’s simple: create a game with whatever technology you want, and publish it into the Windows Store. Judges will look favorably on games that add Windows 10 features like Cortana or Inking.
  • Mixed Reality Game – Grand Prize: $50,000
    Ooh, so you want to enhance this world you live in with something a little… augmented? Virtual? Come and join us in the Mixed Reality challenge and build a volumetric experience that takes advantage of 3D content in a virtual space. You’ll need to create your game for Windows Mixed Reality but you can use technology like Unity to get you kickstarted. Oh, and don’t forget the audio to really immerse us in your world.
  • Console Game – Grand Prize: $25,000
    Console gamers unite! Want to try your hand at building a game for Xbox? This category is your jam. Your UWP game will be built for the Xbox One console family and will incorporate the Xbox Live Creators Program with at least Xbox Live presence. Extra consideration will be given to games that incorporate more Xbox Live services, such as leaderboards and statistics.

Teams will be judged on fun factor, innovation, production quality, and the business aspects of their entry.  Winners will be selected from the top three in each of the four categories for a grand final in 2018, where the prizes will be awarded.

Visit the Dream.Build.Play site for more information.  There is a video explaining the competition on Channel 9.

Getting the most out of VMware Fusion 8.5 running Windows 10

I’ve been trying to get the most performance out of my Windows 10 virtual machines running on my MacBook Pro through VMware Fusion. I have a Windows 10 virtual machine that I use for software demos and for testing beta versions of Windows. It’s been running much slower than you would expect on a 2-3 year old MacBook Pro with a quad core i7.  I’ve collected the following tips (the sources are listed at the end) and they have improved the performance.

From the MacOS Side

Exclude the virtual disks from Time Machine backups.

You’ll want to avoid backing up the virtual machines with Time Machine.  If Time Machine tries to back up a virtual machine while it is being used, it will probably fail to perform the backup and it will definitely throttle the disk I/O.

  • Run the Settings App
  • Open “Time Machine”
  • Click the “Options” button
  • Under the “Exclude these items from backups”, click the “+” button.
  • Select the Virtual Machines folder.  By default, this will be located in your documents folder.  Once you have selected the folder, press the “Exclude” button.
  • Press the “Save” button

If you are running an anti-virus application on your Mac, make sure that it is excluding the Virtual Machines folder.

From the Virtual Machine Side

With your virtual machine stopped, you can make some system changes to achieve better performance.  Within Fusion and with the virtual machine open (but not running), open the Settings dialog.  You’ll want to make the following changes:

  • Open “Display” and clear the “Accelerate 3D Graphics” checkbox.
  • Open “Processors & Memory”
    • Set the number of processor cores to a value of n-1 or less, where n is the number of actual cores on your Mac.  My MacBook Pro has a quad core i7, so I run with 2 cores assigned to the virtual machine.
    • Give the virtual machine as much RAM as you can without starving the host OS.  My Mac has 16 GB, so I split it 50/50.  If you have less memory, remember to leave at least 2 GB for MacOS.
    • Open Advanced Options and select “Enable hypervisor applications in this virtual machine”
  • Open “Hard Disk (SCSI)”
    • Open “Advanced options”
    • Set bus type to SCSI
    • Set “Pre-allocate disk space” to enabled.

There are some settings that are not directly exposed through the settings dialog; for those, you’ll need to modify the .vmx file directly.  There are a couple of ways of getting at the .vmx file; the clearest technique is documented on the vmguru.com page: “Modifying the .vmx file step-by-step”.

  • Change ethernet0.virtualDev = “e1000e” to ethernet0.virtualDev = “vmxnet3”
    This will change the default network adapter to an enhanced driver.
  • Add the line scsi0:0.virtualSSD = 1
    This will optimize disk I/O for SSD drives.  Only use this if your MacBook has an SSD drive.
  • Add the line mainMem.backing = “swap”
    This may speed up memory swapping.
  • Add the line MemTrimRate = “0”
    This disables memory trimming, reducing overhead for the Fusion memory manager.
  • Add the line sched.mem.pshare.enable = “FALSE”
    This turns off memory sharing between virtual machines.
  • Add the line prefvmx.useRecommendedLockedMemSize = “TRUE”
    This speeds up I/O at the cost of increased memory usage in the host OS.
  • Add the line MemAllowAutoScaleDown = “FALSE”
    This prevents Fusion from attempting to start the virtual machine with less memory than specified; starting with less memory can trigger Windows activation.
  • Add the line logging = “FALSE”
    Disabling the logging should speed things up a bit.
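Taken together, the additions to the .vmx file look like this; the ethernet0.virtualDev line replaces the existing e1000e entry rather than being added a second time:

ethernet0.virtualDev = "vmxnet3"
scsi0:0.virtualSSD = 1
mainMem.backing = "swap"
MemTrimRate = "0"
sched.mem.pshare.enable = "FALSE"
prefvmx.useRecommendedLockedMemSize = "TRUE"
MemAllowAutoScaleDown = "FALSE"
logging = "FALSE"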

If you don’t need snapshots, remove them.  When you use a snapshot, disk I/O is passed through each snapshot.  That will slow things down.


Resources for these suggestions

  1. VMware Performance Enhancing Tweaks (Over-the-Counter Solutions)
  2. Making Windows 10 inside VMWare Fusion 8.x a bit quicker on OSX 10.11 El Capitan
  3. How to Fix Slow Windows VMs on VMware Fusion 8.x
  4. Excluding the Virtual Machines folder from being backed up by Time Machine (1014046)
  5. Troubleshooting Fusion virtual machine performance for disk issues (1022625)

Free event at Union College: Out-thinking Old School: the Intersection of Play and AI

On Friday, May 26th, 2017, there will be a presentation at Union College in Schenectady on Gamification and AI.  It will be in Karp Hall, room 105, and will be held from 12:50 PM to 2 PM.  The presenter is Phaedra Boinodiris (@innov8game), a Senior Strategy Lead for Education/Technology at IBM.  Phaedra will be discussing how artificial intelligence is being used to enhance game play.

This event is open to the public.  For a map of Union College, jump to this link.  For directions to Union, enter “807 Union St, Schenectady, NY 12308” into your GPS device of choice.

Phaedra Boinodiris holds 6 patents in the gaming space, was named one of the top 100 women in the games industry and is the co-founder of womengamers.com. Author of Serious Games for Business, Boinodiris started the first scholarship for women to pursue degrees in game design and development in the US. She currently teaches at UNC-Chapel Hill where she is also the UNC Social Entrepreneur in Residence.


Debugging devices without displays or debuggers

I’ve been writing firmware for an RFID reader that connects over USB to an Android device.  Our installers will need to upgrade the readers out in the field, and they have no way of knowing which firmware has been installed.  The reader, an Elatec TWN4, has a pretty decent API that you write code for, using the C language.  Their API includes a wonderful function called “Beep”.  You pass in the volume, the frequency, how long to play the tone (in ms), and finally, how long to be quiet after the tone has been played.  So I have been setting the readers to play a few notes on power up.  This allows the installers to know which firmware has been installed.

The original firmware plays the opening notes to “Smoke On The Water”, because anything that can produce at least 4 notes can play it.  The following C code will beep its way through some vintage Deep Purple.
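This is a minimal sketch of that startup tune, using the Beep(volume, frequency, onTime, offTime) signature described above; the volume, note lengths, and rounded frequencies are illustrative values, not necessarily what the firmware shipped with:

// Opening phrases of "Smoke On The Water": G - Bb - C, G - Bb - Db - C
// Beep() comes from the Elatec TWN4 SDK headers: volume, frequency in Hz,
// tone length in ms, and silence after the tone in ms.
void PlayStartupTune(void)
{
    Beep(100, 196, 300, 50);   // G3
    Beep(100, 233, 300, 50);   // Bb3
    Beep(100, 262, 450, 150);  // C4

    Beep(100, 196, 300, 50);   // G3
    Beep(100, 233, 300, 50);   // Bb3
    Beep(100, 277, 150, 50);   // Db4
    Beep(100, 262, 450, 150);  // C4
}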

We added some code to the firmware to allow our app to put the reader in a sleep mode.  Our installers will need to upgrade a few devices out in the field, so it was time to change that tune.  By checking a few different sites, we found simplified chord progressions for some recognizable songs.  My choices were restricted to simple note changes; you can’t generate complicated chords from a device that only knows how to beep.  It does that beep very well, but at the end of the day it’s only a beep.

I needed to play something else to let the installers know that the firmware had been updated.  Something short, something simple, something recognizable.  One of my musically inclined co-workers worked out the opening notes of “The Final Countdown” by Europe.  That song has a distinctive opening riff.  And many cover versions.  Some might say too many.

I found a note-to-frequency conversion table and used that to create a set of constants for the notes I needed.  That allowed me to specify the beeps with readable note constants instead of raw frequency values.  You can get those constants here.  With the use of those constants, you can play the opening notes of “The Final Countdown” with the following code.
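The constant names, volume, and note lengths below are illustrative, and the frequencies are rounded to the nearest Hz; treat this as a sketch of the approach rather than the exact listing:

// Note-to-frequency constants (rounded to the nearest Hz)
#define NOTE_FS4 370
#define NOTE_B4  494
#define NOTE_CS5 554
#define NOTE_D5  587

// Opening riff of "The Final Countdown"
void PlayFirmwareUpdatedTune(void)
{
    // C#5 - B4 - C#5 - F#4
    Beep(100, NOTE_CS5, 150, 30);
    Beep(100, NOTE_B4,  150, 30);
    Beep(100, NOTE_CS5, 300, 100);
    Beep(100, NOTE_FS4, 600, 200);

    // D5 - C#5 - D5 - C#5 - B4
    Beep(100, NOTE_D5,  150, 30);
    Beep(100, NOTE_CS5, 150, 30);
    Beep(100, NOTE_D5,  300, 100);
    Beep(100, NOTE_CS5, 300, 100);
    Beep(100, NOTE_B4,  600, 200);
}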

When using the constants, the code is much easier to read.  And it becomes much easier to create new song intros.  With that in place, the installers can quickly check the firmware version by powering up the RFID reader.  At some point I’ll refactor the code to just read the values from an array, but the current design is easy to set up and read, so I may just stay with what works.

Right now, I need to use Elatec’s development tools to push the firmware out via a simple GUI.  If I could get a command line tool for pushing the firmware out, I could add code to the firmware to return the version number from a query sent over USB.  That would allow me to write a simple app or PowerShell script to identify a connected reader, query the version, and then push the update out.  If anyone from Elatec ever reads this, a command line firmware updater would be very helpful.

Decades of using development tools like Visual Studio have accustomed me to being able to use a debugger to step through the code.  Writing code that you can’t visually debug requires thinking outside the box.  I can test much of the code by having the reader send back text, but when the reader is attached to the device it will actually be used with, that extra output would interfere with how they communicate.  Sometimes you just have to use a different path out of the machine to see what it’s doing.


What not to do on a job interview: Pressing the Self-Destruct Button

Image by Tumisu

I’ve been with my current employer for the better part of two decades and I was thinking back to the job interviews that I went on before taking this one.  There were two places that I interviewed at where I deliberately blew the interview because I realized we were not compatible.  Before I continue, don’t do what I did.

The first place was a company that was not long past its startup days.  They did web development and they had probably fewer than 20 people.  A friend of mine had started working there a few months earlier and he was still bullish on the company.  I applied, and on his recommendation, I was brought in for an interview.

I met first with the owner of the company.  That part went OK, but I didn’t feel comfortable with the owner.  I couldn’t narrow it down to anything specific, but something just didn’t feel right.  It could have been his personality, it could have been my unease being back in the job market after less than a year at the current position.  I just wasn’t comfortable with him.

I then met with the director of development.  Let’s call him “Sam”.  My interview with Sam started off well; we seemed to hit it off.  At that point in time, I knew nothing about web development and had been upfront about that.  They were looking for more of a back-end coder, so my SQL skills more than made up for the lack of all things HTML.  We talked SQL and performance analysis and things of that nature.  The more we talked, the looser Sam became.  He started saying negative things about some of the developers on his team.  Nothing in depth, but totally inappropriate to mention in an interview.  Actually, inappropriate to mention at all.

Sam had been a C programmer and loved to write code that was more complicated than necessary.  On a white board, he had written a single line of code that was an unholy mess of functions and pointer arithmetic and array offsets.  It was his standard programming challenge for job applicants.  He asked me to parse it.  And this is more or less what I said:

I would fire the person who wrote this code.  It’s an exercise to show how clever you are for writing it.  By writing all of the code as a Nested Series of Functions from Hell, you eliminated readability and maintainability from the code.  And just forget about the error handling; there’s no room for it.  If any one part changes a parameter or return type, the best that you can hope for is that it fails to compile.  At worst, it would continue to run, you would get the wrong results, and you would then spend hours trying to figure out what had changed.

Well, that was not the answer that Sam was expecting.  He made a big production of going over the code, function by function, pointer by pointer.  He had to make his point; to be fair, my remarks were pretty rude.  He tried to get me to agree with him that the code was elegant.  I politely demurred and the interview was pretty much over.  To no great surprise, they did not call me back.

The next interview was with a larger company.  I was interviewing for a Java developer position.  I had taken some Java courses, but had little real-world experience with the language.  I was comfortable enough with Borland’s JBuilder Java IDE to talk somewhat about it.  My current job was transitioning from Delphi to Java, so it was a skill I was starting to pick up.  My current employer was big on what was then called the AS/400.  Other than writing SQL queries over an ODBC connection to an AS/400, I knew nothing about the AS/400.

This interview was the type where you spend 20 minutes at a time with a person or small group and then get passed to the next group.  They had told me to plan on 3 hours for the interview.  I met first with the Java people.  That went well.  They understood that my actual Java experience was limited, but I knew the tools they were using and I knew how to write client/server applications.  I then met with the AS/400 people.  Or rather, the people who would be managing the AS/400 people when they hired the AS/400 people.  They wanted me to be the first person on the team, to port their application from UNIX to the AS/400.

I explained that I was not an AS/400 expert and that my level of AS/400 skill could be measured as none.  They didn’t care; they wanted an AS/400 developer and that was where they would put me if I was hired.  I said that I was looking for a Java developer position and I didn’t have the AS/400 skills they were looking for.  They said that would be OK and I could learn the AS/400 as I went along.  They then said that I could move to the Java team after being on the AS/400 team for 6 months.

They were either lying to me or they had no idea what they were talking about.  There was no way that I would have accomplished anything meaningful in 6 months.  Between not knowing what their app did or how it was designed, and not knowing anything meaningful about the AS/400, 6 months was too short a time period.  And from a business perspective, you are not going to spend 6 months getting a developer up to speed on a technology that no one else knows and then allow him to transfer to another team.  That made no sense.

I was then shuttled off to the marketing and sales teams.  They showed me how the app worked and how they sold it.  They did mention how excited they were to be getting an AS/400 version of their application.  They seemed to think that I was going to be the guy, or one of the guys, who gave them the AS/400 app.  Either way, it was going to be a non-starter for me.

Finally, I met with the president of the company.  She swore like a sailor and kept switching topics.  At one point she started talking about a delay of some new feature from one of the teams.  She named each person and described where she thought that person could have dropped the ball.  She then asked me how I would deal with the problem if I had her job.  We then spent the next few minutes talking about the situation.  I broke it down by timeline.  Was the timeline to add the feature realistic?  Were enough resources available to implement and test the feature?  Did they have a manager measuring progress against the timeline?  The usual management stuff.  It was just very odd that we were talking about a specific problem with specific people.  I later ended up working with people who used to work there, and they said that development delays were a constant problem.

We then got around to talking about the position.  I said that I had come in for a Java position, but the job was being pitched as a combination AS/400 admin/developer role.  And that was not my skill set.  She said that when they discussed my resume, my current employer’s experience with the AS/400 was more important than any other skill that I had.  I thanked her for her time and finally left.  It was another opportunity where I did not expect or receive a call back.

I have gone on very few job interviews and I handled both of these badly.  With the first position, I should have made an attempt to parse the Code From Hell and kept my opinion to myself.  It was a programming pissing match and my comments did not move the bar forward.  For the second one, I should have halted the interview process once I realized that our job expectations did not match up.  Even if you don’t want the job, you don’t want to blow the interview.  People move around, and you could interview with some of the same people somewhere else and lose the opportunity for your dream job.  Always do your best in the interview.  If you don’t think that the job is right for you, you can always turn down the job offer.

Using console jQuery to scrape lists from Apple’s developer portal.

Scrape
I needed to grab the lists of registered devices and developers from our company’s Apple Developer portal. Unless I’m being particularly obtuse (an outcome that I never rule out), Apple does not provide any means of exporting the lists.

Apple only allows 100 devices of each type (iPhone, iPad, iWhatever) to be registered as development devices. No matter how many iOS developers you have at your company, 100 is the limit. And if you remove a device from that list, it still counts towards that total.  Once a year, you can reset the list, carry over the devices that you still need, and drop the ones that are not needed.  To make this easier to manage, I wanted to get a list of the devices and their ids and have the developers pick the ones that they still need.

So I wanted to export that list.  And Apple doesn’t let you export that list.  You can see it on the screen and work with the items in the list, but no export.  I figured that I wasn’t the only person dealing with that limitation, so I did a quick search on Stack Overflow and found this little gem.

var ids = ["Device ID"];
var names = ["Device Name"];
$("td[aria-describedby=grid-table_name]").each(function(){
    names.push($(this).html());
});
$("td[aria-describedby=grid-table_deviceNumber]").each(function(){
    ids.push($(this).html());
});

var output = "";
for (var index = 0; index < ids.length; index++) {
    output += ids[index] + "\t" + names[index] + "\n";
}
console.log(output);

To use that code, you would go to the list of devices in the browser. Then open up the developer tools for that browser. For example, in Chrome you would press F12 to open up the developer tools. Staying with the Chrome example, you would click on the Console tab in the developer tools, paste in that JavaScript code, and then press the Enter key. The code executes within the domain of the page and generates a two-column list of device ids and names.

To understand what that code does, you need to look at how the data is rendered on the page. The device list is stored in an HTML table, with each row looking like this:

<tr id="1" tabindex="-1" role="row" class="ui-widget-content jqgrow ui-row-ltr">
    <td role="gridcell" style="text-align:center;display:none;width: 34px;" aria-describedby="grid-table_cb">
        <input role="checkbox" type="checkbox" id="jqg_grid-table_1" class="cbox" name="jqg_grid-table_1">
    </td>
    <td role="gridcell" style="" class="ui-ellipsis bold" title="iPad" aria-describedby="grid-table_name">iPad</td>
    <td role="gridcell" style="display:none;" class="ui-ellipsis" title="c" aria-describedby="grid-table_status">c</td>
    <td role="gridcell" style="" class="ui-ellipsis" title="twletpb659m0ju078namuy8xnv2j0fzt1kytanfz" aria-describedby="grid-table_deviceNumber">twletpb659m0ju078namuy8xnv2j0fzt1kytanfz</td>
</tr>

Looking at the cells marked with grid-table_name and grid-table_deviceNumber, we can see the device name and device id as the text of the table cell tags. Each cell has an aria-describedby attribute to identify the type of value being stored. We can search on the values of those attributes to locate the data that we want. Going back to the JavaScript, look at the following lines:

var names = ["Device Name"];
$("td[aria-describedby=grid-table_name]").each(function(){
    names.push($(this).html());
});

The first line declares a JavaScript array with an initial array element of “Device Name”. The next line performs a jQuery select for all of the <td/> elements that have an aria-describedby attribute with the value grid-table_name. The next part of the statement iterates over the list of matching <td/> elements and uses the jQuery html() method to get the text value of each cell and add it to the array. We can then use the same technique to get the device ids, build the combined list as a tab-separated string, and finally dump it to the browser’s console.

I also needed the email addresses of all of our registered developers. The email addresses were not in a table, but part of a list. Each email address is wrapped inside a section element like this:

<section class="col-100 ng-scope">
  <p ng-bind="::person.fullName" class="ng-binding">First Last</p>
  <a class="smaller ng-binding" 
    ng-bind="::person.email" 
    ng-href="mailto:first.last@yourcompany.com" 
    href="mailto:first.last@yourcompany.com">
    first.last@yourcompany.com
  </a>
</section>

I just needed the text part from the <a/> element. Getting the email addresses was a simpler version of the code for getting the devices. I just did a jQuery select on the ng-bind attribute and matched on the value “::person.email”. That ended up being a single line of code to run in the browser’s developer console:

$('a[ng-bind="::person.email"]').each(function(){
  console.log($(this).text())
  });
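If you want the addresses as a single comma-separated string, which is handy for pasting into an email client, a small variation like this should do it (it uses the same selector, so the same assumptions apply):

var emails = $('a[ng-bind="::person.email"]').map(function () {
    // Collect the text of each matching anchor element
    return $(this).text().trim();
}).get().join(', ');
console.log(emails);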

And that’s how you can screen scrape data from a web page that doesn’t provide any support for exporting the data.

Bonus round
The aria-describedby attribute is a commonly used accessibility attribute that describes the element it is attached to. The “aria” part of the attribute name is an acronym for Accessible Rich Internet Applications. Among other things, it was designed to allow assistive reading technology to parse a page for users with visual difficulties. It’s a good technology to use on your web pages.

Xamarin Dev Days – Latham, NY – Dec 2nd

Looking to start doing mobile app development with Xamarin, but don’t know where to start?  Then we have some good news for you.  Xamarin Dev Days is coming to the Tech Valley.  We’ll be hosting the event on Saturday, December 2nd, at the new Latham office of Tyler Technologies.  While it’s early to announce an event that is 9 months off, it’s still good to get the word out.

Xamarin Dev Days are community driven, hands-on learning experiences geared towards beginner mobile developers to help them build, test, and connect native iOS, Android, and Windows apps.  We’ll spend the morning with sessions that introduce the Xamarin ecosystem.  This will include an overview of Xamarin, Xamarin.Forms, and using cloud computing through Azure with Xamarin.

There will be a hands-on lab in the afternoon that will walk everyone through building a Xamarin.Forms app that pulls data down from an Azure-hosted database.

Agenda

Time                  Session
9:00 AM – 9:30 AM     Registration
9:30 AM – 10:10 AM    Introduction to Xamarin
10:20 AM – 11:00 AM   Cross Platform UI with Xamarin.Forms
11:10 AM – 11:50 AM   Connected Apps with Azure
12:00 PM – 1:00 PM    Lunch
1:00 PM – 4:00 PM     File -> New App Workshop

What is Xamarin?  Xamarin lets you deliver native Android, iOS, Mac, and Windows applications using your existing .NET skills and code.  You can build 100% native apps from a shared code base.  If you can do it in Swift, Objective-C, or Java, you can do it in C# with Xamarin.

Tickets to this event are free, but you will need to register in advance.  Visit the Latham Xamarin Dev Days page and then click the register button.

If December is too long to wait, check out the other locations on the Xamarin Dev Days home page.  If you want to host your own Dev Days event, then click here.

A Xamarin port of the usb-serial-for-android library

Back in January, I ported the excellent usb-serial-for-android library from the Java source code to Xamarin C#.  We have an Android application that needs to use an external RFID reader.  The reader is an Elatec TWN4 RFID reader and it can work as a virtual comm port over USB. To use that reader, I needed a general purpose serial-over-USB library.  I ended up taking a very good one from the open source Java community and porting it over to C#. That ported library is up on GitHub under the name UsbSerialForAndroid.

Out of the box, Android doesn’t come with a lot of support for serial port devices.  It’s probably not a common use case.  Starting in Android 3.1, support was added for USB host mode to allow access to USB devices from Android apps.  There was enough of a need for serial devices that Mike Wakerly wrote a very good library in Java named usb-serial-for-android.  It supports many of the common USB serial chipsets.  So I wanted to use that.

With Xamarin.Android, you have basically two ways of consuming Java libraries.  You can use them directly by creating a C#-to-Java binding and bundling the .jar file with your project.  While that can work, and work very well, it can also be a bit clunky and you can hit some issues mapping the Java method calls to C#.  Another group had gone down that path.  They implemented a wrapper for the .jar file and added some helper classes.  It looked like their project was abandonware, though, and it was not using a current version of Mike’s code.  You would also have the limitation of not being able to debug into that code library.

If you have the source code, you can port the code from Java to C#.  I decided to go down that route.  It took a couple of days, but I was able to port all of the Java code to C#.  It went over more or less as is.  Some changes needed to be made because reflection is handled differently in C# than in Java.  There was also a bug in Xamarin’s API access code that mangled the array handling in some Java code.

In Java, ByteBuffer.wrap(someBuffer) allows for two-way updating of a Java array through a stream buffer.  A bug in Xamarin’s API mapping tool emits code that allocates a new buffer when you call Wrap, so changes made to the ByteBuffer are not reflected in the original byte array.  This is logged in Xamarin’s Bugzilla database here and here.

In the CdcAcmSerialPort.Read() method, defined here in C# and here in Java, I needed to add a line to copy the new array back over the original array.

In the original (edited) Java code, we had this:
final ByteBuffer buf = ByteBuffer.wrap(dest);
if (!request.queue(buf, dest.length)) {
    throw new IOException("Error queueing request.");
}

final int nread = buf.position();
if (nread > 0) {
    return nread;
}

In the C# code, I added a call to BlockCopy to overwrite the original byte array with the updated contents:
ByteBuffer buf = ByteBuffer.Wrap(dest);
if (!request.Queue(buf, dest.Length))
{
    throw new IOException("Error queueing request.");
}

int nread = buf.Position();
if (nread > 0)
{
    // Copy the updated buffer contents back over the original array
    System.Buffer.BlockCopy(buf.ToByteArray(), 0, dest, 0, dest.Length);
    return nread;
}

I also replaced some integer constants with enumerated types where it made sense to do so, and I took the C# helpers from the LusoVU repository.

As much as I could, I followed the code structure of the Java library.  When changes are made to that library, I can view those changes and make the equivalent changes in the C# code.  The end result was that I ended up with all C# code and it works great.

The TWN4 has become my favorite RFID reader.  It’s very good at reading different card types and you can write custom firmware for it in C.  I used it in another project where it had to implement a custom protocol to talk to the host device.

TWN4 reader

And then my blog was defaced

A couple of weeks ago my blog was defaced through a security hole in WordPress. About 800,000 blogs were hit via something called the REST-API exploit. I saw something like this on the main page of my blog:

Hacked message

I blurred out the identifying text and graphics.  No sense giving any credit to the ones behind the hack.  I actually support their cause, but not this kind of stuff.

At that point I had no idea what had happened.  I figured that either someone had hacked the OS or someone had hacked WordPress.  I went in and deleted the post, and then my blog stopped working.  I was too busy at the time to deal with it, so I just shut the blog down.  I was running a virtual machine up in the cloud, and I had installed Linux, MySQL, and WordPress manually.  I recommend doing that at least once, but no more than just once.  I had to manually edit a bunch of files so that my WordPress site was the default site for the machine.

I then found out that the problem was caused by a security hole in WordPress 4.7.0/4.7.1 that had since been quietly patched in 4.7.2.  My blog was not set up to automatically update WordPress, so it was one of the 800k that had been hit.

Paris Tuileries Garden Facepalm statue

I should have had automatic updates turned on

I had backups of the blog, so I knew I could get it back up and running.  I decided to take some time and start over again.  While it would have been easy to just delete the hacked posts, there were reports that Remote Command Execution (RCE) attacks were being attempted through this exploit.  I don’t think that I had any plugins that would allow an RCE attack, but I decided to err on the side of caution.

I looked at some of the sites that offer WordPress hosting, but I decided to do it in a VM again.  The price is roughly the same as some of the cheaper hosting plans, but I would have full control over the site.  I would also have full responsibility for keeping it up and running; there’s never a free lunch.

Instead of installing everything myself, I used Bitnami’s one-click WordPress installer.  In the Azure marketplace, Bitnami has an installer that will install the server edition of Ubuntu 14.04 LTS “Trusty Tahr” with all of the bits needed to run WordPress.  The “LTS” designation is important; it stands for Long Term Support, and this version will be supported until April 2019.  It included the phpMyAdmin tool for managing MySQL databases.  I created a new database and restored the table with the posts from my old blog.  I backed up the new blog database (just to be safe).  I tried importing all of the rows from that table into the new blog, but that broke the blog.  Something in the hacked posts was probably doing something bad.  I restored the new blog from the backup and then exported the old blog posts up to the date that it was hacked.  I restored those records and the blog was happy.

So the blog was more or less ready to go at this point.  I installed VaultPress and it immediately blocked people trying to do things to it.  It wasn’t really public yet; it had a DNS name visible to the outside world, but not my DNS name.  I went to my DNS registrar (GoDaddy) and updated the DNS records to redirect rajapet.com from the old VM to the new one.  With the DNS updated, I was able to do something that I had been meaning to do for a while: add SSL/TLS support and enable HTTPS for the blog.

I’m not doing anything that really needs HTTPS, but the browsers are really pushing for sites to use HTTPS.  In the old days, that meant buying an SSL certificate, installing it, configuring your site to use it, etc.  The people behind Let’s Encrypt have changed that story.  It’s a free and open Certificate Authority that provides free certificates to allow anyone to enable a trusted HTTPS site.  All you need is to own your own domain (and have some level of access to the web server).  They provide the cert and the tools to install and update the certificate.

Let’s Encrypt is a free, automated, and open Certificate Authority.

It was just slightly tricky to get the Let’s Encrypt tools to work on my site.  Bitnami’s installations of Apache and WordPress are slightly different from standard installs.  Not wrong, just different enough that the automated Let’s Encrypt tool didn’t complete its task.  The documentation on the Bitnami site is very good and walks you through the manual Let’s Encrypt steps.  I set the certificate to use rajapet.com rather than www.rajapet.com.  The “www.” is archaic and I don’t need it for this site.  With good stuff like Let’s Encrypt, there is really no excuse not to use HTTPS any more.

I edited the httpd-app.conf file that Bitnami uses in place of the .htaccess file to redirect HTTP and www.rajapet.com requests to the simpler https://rajapet.com. If you are running Bitnami’s WordPress install, it’s pretty easy to change and is more or less documented here.  In /opt/bitnami/apps/wordpress/httpd-app.conf, you’ll want to add the following lines after the line with “RewriteEngine On”:


    #SSL redirection
    RewriteCond %{HTTPS} !on
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    # Replace www.xxxx with xxxx
    RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
    RewriteRule ^(.*)$ https://%1/$1 [R=301,L]


The first block takes any request that uses HTTP and redirects it to HTTPS. The second block strips the “www.” from the start of the URL.  You can still use HTTP or “www.”, but you’ll be taken to https://rajapet.com each time, with a 301 redirect to let search engines know that this is a permanent change of the link.  After making that change, remember to restart Apache.

So the blog is back.  I only restored the posts; past comments may or may not come back.  I installed the usual security plugins, but I still need to install the code formatting plugins.  I picked a new theme that’s pretty basic and mobile friendly.  That will probably change; it’s kind of on the “blah” side, at least as I have it set up.  I used to have an about page that had a form for entering comments.  That was a SPAM magnet and I had disabled it just before the hack attack.  If you want to get in touch with me, the best bet is through one of the social media links in the sidebar.