Quality and Software

My last post, and talking about what quality software is, reminded me of “Zen and the Art of Motorcycle Maintenance.”

Here is an excerpt from chapter 18:

Phædrus wrote, with some beginning awareness that he was involved in a strange kind of intellectual suicide, “Squareness may be succinctly and yet thoroughly defined as an inability to see quality before it’s been intellectually defined, that is, before it gets all chopped up into words…. We have proved that quality, though undefined, exists. Its existence can be seen empirically in the classroom, and can be demonstrated logically by showing that a world without it cannot exist as we know it. What remains to be seen, the thing to be analyzed, is not quality, but those peculiar habits of thought called ‘squareness’ that sometimes prevent us from seeing it.”

Please Use Loosely Coupled Pure Subroutines: How To Subdivide Programs

Some people get bent out of shape about superficial style choices in code.  Whenever you work with someone new there is always the “Okay, curly braces go after a newline” conversation.  I have yet to meet someone whose brace placement makes any difference to me; I don’t care where you put your curlies.  The things we should care about are substantial style choices that make code more reliable and easier to maintain.  For example, I think it’s hardly controversial to say that we should name variables to match their semantics: “count” is a better variable name than “c”, just as “widgetCount” is better than “count”.

There is another style consideration in the same vein, one I thought we were all on the same page about, or at least soon would be.  But it’s been 8 years since I had this revelation and I haven’t seen a change in practices.  So I’m going to cry it out to the world in hopes that people adopt it.  If there were one thing I could tell every programmer that I think would help the State of Software the most, it would be this.

Please use loosely coupled pure subroutines.

Every programming language worth talking about has support for subroutines, all the way down to the assembly level (the 6502 jsr opcode, C functions, Java methods).  While programming we constantly have to decide when and why to break certain logic into subroutines.  How do you do it?  Let us refer to the Linux Kernel Coding Style document; its authors maintain a lot of important code with many contributors, so they should have a good idea:

This is more or less the rule I’ve adopted; hopefully you have something similar running through your brain.  Maybe this is controversial, and maybe I should be happy stopping here so we can all get on the same page, but no, I’m going to push forward and assume we all agree with some version of this rule.

Different languages have different facilities for limiting scope and mutability: C has blocks for structured programming, Java has objects with private fields, C++ has ‘const’, etc.  Why do languages have these features?  Because managing context and capabilities lets us work with complex code efficiently, by dividing it into small chunks that are easy to reason about.  When we divide our program into subroutines we can do better by limiting their context and capabilities.  We do this by using loosely coupled pure functions.

(For those who don’t know what pure functions are: they are functions that, given the same input arguments, always return the same value no matter the calling context, and they cause no side effects.  Loosely coupled in this context means the subroutines should receive only the minimal set of information needed to make their calculations; see also Information Hiding.)

For the purposes of our discussion we’ll say that functions that appear pure by contract are as good as pure.  We don’t care if you have side effects on an object that you created and no one else can see; we aren’t Haskell.

Enough abstract talk; let’s bring this principle back to a real language, C#.  C#, unlike Fortran or Nim, doesn’t have explicit support for pure functions, and private methods implicitly pass around all the state of the object.  How can we use this principle?  By building our methods from loosely coupled pure static functions:
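The original listing was lost when this post was migrated, so here is a sketch of the shape I mean.  Only the names Foo, Start, CalcBar, and CalcBaz come from the discussion; the signatures and bodies are invented for illustration:

```csharp
public class Foo
{
    int widgetCount = 3;

    // The stateful public surface: it touches fields and returns void,
    // but delegates every calculation to the pure functions below.
    public void Start()
    {
        // The temporal coupling between CalcBar and CalcBaz is explicit
        // right here at the call site: CalcBaz consumes CalcBar's result.
        int bar = CalcBar(widgetCount);
        widgetCount = CalcBaz(bar);
    }

    // Pure: given the same input they always return the same output,
    // cause no side effects, and never touch the enclosing object's state.
    public static int CalcBar(int count) { return count * 2 + 1; }
    public static int CalcBaz(int bar)  { return bar * bar; }
}
```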

CalcBar and CalcBaz are loosely coupled pure functions.  What have we gained by dividing our code this way?

  • The temporal coupling between CalcBar and CalcBaz is explicit at the call site in Start, which means Joe Programmer coming back into Start later will be less likely to futz it up.
  • CalcBar and CalcBaz can be evaluated on their own merits; all the code and all the possible data required to understand them sit right between the curlies.
  • CalcBar and CalcBaz are easily and directly testable.  Maybe Foo.Start is difficult to test but there is no excuse not to test CalcBar and CalcBaz if you want to.
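To make that last bullet concrete: assuming CalcBar is a public static pure function from int to int (a hypothetical signature), a test needs no setup, no mocks, and no object construction.  For example, with NUnit:

```csharp
using NUnit.Framework;

[TestFixture]
public class CalcBarTests
{
    [Test]
    public void SameInputSameOutput()
    {
        // A pure function's whole contract: input in, output out,
        // with no calling context to arrange first.
        Assert.AreEqual(Foo.CalcBar(3), Foo.CalcBar(3));
    }
}
```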

Right about now OOP heads are starting to get mad at me.  Cool down, my babies, let me explain.  Objects as an abstraction are very stateful; they have public methods with void return types.  If you are working with OOP that’s just how things are done.  I’m not suggesting you write all your code functionally and ditch OOP.  Keep building objects’ public interfaces however you were doing it.  I’m merely suggesting you build those public methods on solid functional footing by dividing them into collections of loosely coupled pure functions.

Now, I understand that depending on your situation pure functions might be hard to swing.  Like I said earlier, apparently pure functions are just as good, and loosely coupled static functions are still better than instance methods.  You just have to choose the most limiting construct available for your subroutine.

If you haven’t read Andrew Hunt and David Thomas’s “The Pragmatic Programmer” I highly recommend it.  It boils down to a list of tips for programming that are useful no matter what tools you are programming with (here is the list of tips).  I’m now going to list the tips that are apropos to programming with loosely coupled pure functions:

  • DRY—Don’t Repeat Yourself – when you divide your code into loosely coupled pure functions you remove required context which makes your code more reusable
  • Eliminate Effects Between Unrelated Things – this one should be pretty apparent, pure functions are orthogonal by definition
  • Always Design for Concurrency – pure functions factor out state change which makes them easier to reason about in threaded code since they can’t have race conditions
  • Design to Test – public pure functions are easier to test, all you have to do is call them with their input and check the output

If you are already using a functional language, chances are you blew off this article paragraphs ago; but according to TIOBE, you probably aren’t using one.  I honestly think this is the single easiest thing people can start doing to make their code easier to maintain, easier to write, and less buggy.

Automatic F# Compilation in Unity

I was playing around with F# and Unity integration, and there are some nice examples out there, but they all rely on compilation systems outside of Unity: build some .dll in Xamarin, then drop it into a Unity project.  Cool, but it’s not usable if we don’t hook into Unity’s build system and get a fresh build every time we edit a .fs file.  So, I decided to explore doing just that with UnityEditor.AssetPostprocessor.  When you implement an AssetPostprocessor you get called with a message every time an Asset is reimported, which happens whenever the file gets touched on disk.  So what we can do with an AssetPostprocessor is recompile a .dll full of F# code whenever a dependent F# file is edited.

Here is example code of that method:
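The original listing is gone, but the method in question is AssetPostprocessor’s static OnPostprocessAllAssets message, and a sketch of the idea looks like this (the fsc invocation, file layout, and output path are assumptions you would adapt to your project):

```csharp
using System.Diagnostics;
using System.IO;
using System.Linq;
using UnityEditor;

// Rebuilds the F# plugin .dll whenever any .fs file under Assets is
// (re)imported, i.e. whenever it is touched on disk.
public class FSharpPostprocessor : AssetPostprocessor
{
    static void OnPostprocessAllAssets(
        string[] importedAssets, string[] deletedAssets,
        string[] movedAssets, string[] movedFromAssetPaths)
    {
        if (!importedAssets.Any(path => path.EndsWith(".fs")))
            return;

        // Naive build rule: hand every .fs file under Assets to fsc.
        var sources = string.Join(" ",
            Directory.GetFiles("Assets", "*.fs", SearchOption.AllDirectories));

        var fsc = Process.Start("fsc",
            "--target:library --out:Assets/FSharpScripts.dll " + sources);
        fsc.WaitForExit();

        AssetDatabase.Refresh();  // let Unity pick up the fresh .dll
    }
}
```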

In order to actually use this you’ll need to build a target in MonoDevelop for a .dll that will contain this AssetPostprocessor.  Put that .dll in your project’s Assets.  You’ll also have to make sure that your Unity project has FSharp.Core.dll as an Asset if you want to use your F# code.  I had luck using version 2.3.0.  You can check the version with “monodis --assembly <path to dll>”.  The build rule will have to be made a bit more robust if you want to compile something with a more complex dependency tree, of course.  If you need help making MonoBehaviours in F#, this will help.

Sorry I didn’t post this on Github.  There is a considerable amount of work left to make this idea handle most cases.  It served my purpose as is; maybe I’ll package it up later.

Adventures with IronScheme pt. 2

So, I got a response on StackOverflow from the creator of IronScheme on how to combine enums.  It was very nice of him; IronScheme is obviously a labor of love.  Just as I wrote a conversion of the “hello world” program generator in F#, I now present it in IronScheme, with no attempt to reduce the verbosity.


Software Engineering Parables 3: Project Xanadu


In the 1960s Ted Nelson inspired a generation with his papers on the future of information and his coined term ‘hypertext.’  The culmination of his thoughts became Project Xanadu.  Project Xanadu was a mixture of the World Wide Web and version control which, in his eyes, would solve many of the world’s woes.  His assumption was that many of the world’s problems stem from ignorance and misinformation.  It was ambitious and complete in its appraisal of the problems with exchanging and archiving information.  One noted killer feature was the impossibility of dead hyperlinks.  Project Xanadu had a ~20 year head start on Tim Berners-Lee’s World Wide Web, but alas it didn’t ship in time.

This story comes to mind when, in software engineering, we are designing something and the discussion devolves into a chain of minor considerations that complicate the larger goal.  History shows us that the implementation of the web that revolutionized the world was the system that ignored versioning and dead links; the flawed version is the one that shipped and brought immeasurable value to humanity.


Reflection Emit with F#

Been playing around with making a Lisp with F#.  I don’t really know F# and I haven’t used System.Reflection.Emit directly yet…  So here is my direct translation of Reflection.Emit Hello World in F#:
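The listing didn’t survive the move to this page, but a sketch of that direct translation goes roughly like this (using the old .NET Framework AppDomain-based Emit API from the linked post; names like asmBuilder are mine):

```fsharp
open System
open System.Reflection
open System.Reflection.Emit

// Build a dynamic assembly containing one type with one static Main.
let asmName = AssemblyName("HelloWorld")
let asmBuilder =
    AppDomain.CurrentDomain.DefineDynamicAssembly(asmName, AssemblyBuilderAccess.RunAndSave)
let modBuilder = asmBuilder.DefineDynamicModule("HelloWorld", "HelloWorld.exe")
let typeBuilder =
    modBuilder.DefineType("Program", TypeAttributes.Public ||| TypeAttributes.Class)
let methodBuilder =
    typeBuilder.DefineMethod(
        "Main",
        MethodAttributes.Public ||| MethodAttributes.Static,
        null,                        // void return type
        [| typeof<string[]> |])

// Emit the body: Console.WriteLine("Hello, World!"); return
let il = methodBuilder.GetILGenerator()
il.Emit(OpCodes.Ldstr, "Hello, World!")
il.Emit(OpCodes.Call, typeof<Console>.GetMethod("WriteLine", [| typeof<string> |]))
il.Emit(OpCodes.Ret)

typeBuilder.CreateType() |> ignore
asmBuilder.SetEntryPoint(methodBuilder)
asmBuilder.Save("HelloWorld.exe")
```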

Reference:

http://blogs.msdn.com/b/joelpob/archive/2004/01/21/61411.aspx

Adventures with IronScheme

TL;DR: I wasted an afternoon attempting something I think is impossible with IronScheme.

Off and on I’ve been playing around with Lisps that target the CLR.  I was newly energized by articles about writing Lisps in F#, so I decided to give it a try.  I met a bit of resistance since I haven’t used OCaml since… 2004.  So instead of working that out I figured if I used IronScheme I’d already have a parser and could skip to the CLR-generation part of the problem, but alas I think I’ve run into an impassable wall.

While trying to use the System.Reflection.Emit library I was perturbed that interfacing with C# libraries from IronScheme was so verbose, unlike the nifty Java interop in Clojure.  Example:

Most of that can be hidden with macros, but it still requires you to specify the type of the instance you are calling the method on, and that’s annoying.  But that wasn’t the impassable wall.  The impassable wall was that I couldn’t figure out how to ‘or’ enumerations.  System.Reflection.Emit.ModuleBuilder.DefineType takes a System.Reflection.TypeAttributes, an enum whose values you are expected to ‘or’ together, as in “this type is public and a class.”  I tried hard to ‘or’ them in IronScheme using the bitwise-ior function, sometimes in combination with clr-cast, but I couldn’t get it to work.  I finally gave up when I saw a footnote in the IronScheme documentation saying that out parameters aren’t supported; that was the signal to me that not everything is possible with IronScheme’s C# interop.  What a waste of an afternoon; I thought for sure it was going to be productive.

The problem case:

Color Blending Exploration

In OpenGL and ShaderLab you can customize the way that blending is done.  In ShaderLab you use the Blend keyword, and in OpenGL you would use glBlendFunc or glBlendFuncSeparate.  Notice that ShaderLab can do the equivalent of glBlendFuncSeparate; you just have to supply more arguments [1].
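For instance (the factor choices here are just an illustration), a ShaderLab Blend line with the extra arguments corresponds to glBlendFuncSeparate, with the first pair applying to the color channels and the second pair to alpha:

```shaderlab
// In a ShaderLab Pass: classic alpha blending for color, additive for alpha.
// Equivalent to glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE).
Blend SrcAlpha OneMinusSrcAlpha, One One
```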

Someone has developed a nice visualization that can be used to explore the settings for these functions:  http://www.andersriggelsen.dk/glblendfunc.php

Wikipedia also has a nice section on alpha blending that is worth checking out: https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending

Checking out the man page for glBlendFunc is always helpful too: https://www.khronos.org/opengles/sdk/1.1/docs/man/glBlendFunc.xml

References:
  1. http://docs.unity3d.com/Manual/SL-Blend.html

Software Engineering Parables 2: Archie Bunker

All in the Family

My next parable comes from Jesse Schell’s wonderful book “The Art of Game Design.”  At some point in the book Dr. Schell recounts the story of the development of the hit TV show from the 70’s, “All in the Family.”  The show had been planned out, a pilot episode was filmed, and the producers decided to do some user testing by having people watch the episode and asking them what they thought of the show.  After watching the pilot people were pretty impressed; they liked it for the most part, except they thought the show would be more enjoyable without Carroll O’Connor’s character, Archie Bunker.

If you haven’t seen “All in the Family” before, Archie is the working-class patriarch of the family.  He has his progressive daughter and son-in-law living with him.  They often end up in some debate where it is Archie’s old-fashioned conservatism vs. his son-in-law’s progressive views.  Archie became an important vehicle for satirizing old-fashioned prejudice and the show was wildly successful.  Bottom line, “All in the Family” would have been a load of shit if they had listened to the feedback.  Not only that, I’ll say the world would be a worse place without Archie Bunker.

Jesse Schell provided this anecdote as a cautionary tale to balance the call to do user testing.  It is that, but I also see it as a tale that highlights the need for a vision for your work.  Had Norman Lear just been seeking to make money he would have tried to please everyone and thrown out Archie Bunker.  Fortunately for us he was in a position where he could execute his vision and make the show he wanted to make.  This was the product we fell in love with, because it didn’t dilute its raison d’être.

Software Engineering Parables 1 : Feynman’s Biology Class

Dr. Richard Feynman c/o museumvictoria.com.au

As software engineers we should always be on the lookout for lessons from other disciplines or from the past.  This industry is young, but also has amnesia.  Over the years I’ve clung to a few stories that resonated with me and formed my opinions on the art.  I’m going to share them in a series called Software Engineering Parables.

One book that is just chock-full of wisdom and can be a source of inspiration for anyone is “Surely You’re Joking, Mr. Feynman!”.  It’s an autobiography by the famous physicist Richard Feynman.  Feynman was a bit of a rebel genius who had a knack for seeing the same things as everyone else, but understanding them differently than the average Joe.

I believe the story goes that he was at Princeton, studying physics.  Dr. Feynman had such a curiosity and love of learning that he would often sit in on classes in other disciplines.  He picked up a class on biology, showed up for the lectures, and followed along.  I don’t know if he was officially enrolled in the class (it wouldn’t surprise me if he wasn’t), but the story goes that eventually he decided to write a biology paper.  He felt he had something to contribute to the field, so he got to it, and when he was done, since he had befriended some biology majors on campus, he asked one of them to proofread the paper.  The friend took it overnight, then brought it back to Feynman, telling him that his idea was fantastic but the paper was written all wrong; no one would take it seriously.  The friend offered to edit the paper to bring it more in line with what is expected of a biology paper.  Feynman agreed, and in a couple of days the friend returned with the edited paper and handed it to him to read.  Feynman studied the new version and confessed that he didn’t understand it anymore, even though he was the originator of all the ideas and the original author of the paper.  The academic field of biology expected a certain protocol in its papers, but since Feynman studied physics he wasn’t privy to it.

There are so many lessons in this tiny story.  One might be that sometimes we are the authors of our own complexity.  It comes to mind today because I find that different companies have different forces at work that contribute to the level of pomp in communication.  In a startup with limited resources the goal is to keep things simple and efficient.  At big companies, pomp is incentivized as a way to prove your worth and intelligence.