Coming Back to Unity After a Pause

I’ve had a few side projects going on for a while. One was a clone of Bump n’ Jump in Unity to get a feel for Unity development. I set it aside to work on other things, but because of developments at work I’m going to have to start playing with Unity again, so I picked it back up to get back into the groove. Today I’ve written a Breakout clone, and I’ll record a few observations about Unity. You can find the Breakout code as of this article: Breakout, or check out HEAD in case I’ve decided to do more cool things with it: Breakout HEAD

  • It’s hard for me to be so removed from performance. For instance, there is no way to inline a method. I guess .NET and Mono have it now, but it hasn’t made its way to Unity 4.6 yet.
  • It’s difficult to lay out the code conceptually. In the Breakout clone I put most of the logic for the game into the BallController. He is the guy that knows the most and is the controlling factor for the game, but it feels weird for him to be the arbiter of the score and game state. I think an empty GameObject for the game that listens to the ball’s events would feel cleaner to me (see the sketch after this list).
  • The 4.6 UI tools are very nice.  It’s easy to get UI pixel perfect on screen.
  • On that note, I had a hard time getting the coordinate space to pretend to be 640×480.  Maybe I was fighting the system a bit but 640×480 seems convenient for me to think about when making a 4:3 game.
  • MonoDevelop is horrible. I had to turn that off; it can’t even kill and yank correctly. I’m now using a combo of Emacs/omnisharp-mode. Maybe I’ll make a blog post about this setup later.
  • There is some magic that has to be done to get Unity to behave well with Git, this guy on StackOverflow laid it out nicely: How to use Git for Unity?
  • Breakout has such well-defined physics that it was interesting to consider whether I should use Unity’s 2D physics or roll my own. I eventually opted for rolling my own instead of trying to hack the physics engine. It would have been nice to be able to use the Colliders by hand without the physics, but I didn’t see a nice way to do that, so I had to throw them out as well.
  • Levels in Scenes or XML? I just have one level right now, but it was somewhat tedious to lay it out by hand in the editor (especially because of the coordinate system). Still, it was convenient to be able to see the results, and if I had 30 different levels, being able to click through them and see them would be helpful. It seems like I haven’t found a good answer in Unity about when to use Scenes and when to use XML.
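
Here is a rough sketch of the event-driven split I have in mind for the BallController point above. It is not the code in the repo; the event names and score values are made up:

    using System;
    using UnityEngine;

    // The ball only raises events; an empty "game" object owns score and state.
    public class BallController : MonoBehaviour
    {
        public event Action BrickDestroyed;
        public event Action BallLost;

        // Movement/collision code would call these when the events happen.
        protected void RaiseBrickDestroyed() { if (BrickDestroyed != null) BrickDestroyed(); }
        protected void RaiseBallLost()       { if (BallLost != null) BallLost(); }
    }

    // Attached to an empty GameObject; the arbiter of score and game state.
    public class GameManager : MonoBehaviour
    {
        public BallController ball;   // assigned in the inspector
        int score;
        int ballsLeft = 3;

        void OnEnable()
        {
            ball.BrickDestroyed += HandleBrickDestroyed;
            ball.BallLost += HandleBallLost;
        }

        void OnDisable()
        {
            ball.BrickDestroyed -= HandleBrickDestroyed;
            ball.BallLost -= HandleBallLost;
        }

        void HandleBrickDestroyed() { score += 10; }
        void HandleBallLost()       { if (--ballsLeft == 0) Debug.Log("Game over"); }
    }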

Now I’m feeling a pang to post this for the Unity Web Plugin so we can all play with it.  Hopefully I’ll get around to that to spare you the trouble of loading up Unity.

Mike Acton’s Data Oriented Design

Mike Acton, the Engine Director at Insomniac Games, had a talk from CppCon posted on Gamasutra yesterday. Here are some thoughts; if you missed the video, you can check it out here:

Mike Acton’s Data Oriented Design

Mike has a very interesting job. Many developers have the luxury of accepting inferior performance for ease and speed of development. His job is at the other extreme: squeezing as much performance as possible out of a machine. So looking from his point of view provides a very different take on what we do. Some might think his talk is extremist, but remember his job is at an extreme.

The thesis of his talk is that it isn’t wrong to write code that is designed for computers instead of humans. He warns us against overcomplicating problems by translating reality to computers while trying to maintain a human understanding of reality with real-world models. To help us look outside our ingrained real-world-model thinking, he gives us an alternative model for looking at software: it’s just transformations on data. Data is the focus, not code.

This doctrine isn’t necessary for each software engineer to follow, but it definitely is a helpful lens for evaluating our software. Looking at our software as data transformations is an aid we use all the time when we write unit tests. Overprescribing real-world models could be a problem in certain domains too; it’s hard to know when a tool is helpful if we don’t consider the merits of the alternative.

How To Poke 40,000 Facebook Users in 2005

Facebook was just coming out while I was at the university, or The Facebook as it was known then. It rolled out to university students first, so for better or worse I got to experience a little bit of that while I was still in school. It had a few features you don’t find in Facebook anymore, like a picture of The Fonz in the top right-hand corner (I’ve added the black and white image for context; the blue is what used to be there):

It also had a feature, “Poke,” where you could poke people; the next time they logged in it would tell them who had poked them. Facebook had a FAQ at the time, and one of the questions was about the meaning of the poke. They claimed that it had no intended connotation; they just put it in there to see how people would use it.

Well, without trying to examine too much why a university student would want to poke all 40,000 students in their school, merit debates aside, let’s just look at how to do it.

The file I grabbed it from was dated September 28, 2005. This is really simple: we are just going to log in, then synchronously generate poking URLs and call them. You’ll see in genloginurl that the domain at the time was “”. Every school had its own domain. You’ll also see that we are generating the URL with a number that represented the user we are poking. This is why it was so easy to do: everyone got a number assigned to them when they set up an account, so we could just roll through the numbers. The other little caveat that keeps this just out of the reach of dead simple is that I was using the CookieClient library. When you logged in, cookies were set in your browser and read when performing things like a poke, so I needed a way to keep them around.
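
The original script isn’t shown here, but the idea went roughly like this modern-equivalent sketch (using Python’s requests instead of CookieClient; the login and poke URLs and field names below are made up for illustration):

    import requests

    SCHOOL_DOMAIN = "https://<school>.thefacebook.com"  # every school had its own domain

    def poke_everyone(email, password, max_user_id=40000):
        session = requests.Session()  # keeps the login cookies around between requests

        # Log in once; the cookies set here authorize the later poke requests.
        session.post(SCHOOL_DOMAIN + "/login.php",
                     data={"email": email, "pass": password})

        # Everyone got a sequential user number, so just roll through the numbers.
        for user_id in range(1, max_user_id + 1):
            session.get(SCHOOL_DOMAIN + "/poke.php", params={"id": user_id})

    if __name__ == "__main__":
        poke_everyone("me@example.edu", "hunter2")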

The response to the poking was pretty amazing. Here are some incidents that resulted:

  • I was offered a pet rat from someone who didn’t want it anymore.  I had mentioned on my page that I liked rats.
  • I got a job offer from a local web hosting company.  I can’t remember who they were, the guy was really cool though.
  • I got a lot of homophobic messages. The undefined poke feature had a very clear meaning to these people, and they didn’t like being on the receiving end of a poke from another man.
  • The next day I would see people get shocked when they saw me as my face snapped them into recognition.
  • I got 2 messages from a girl, they went something like this: “Hey, I saw that you poked me ;-)”  “Hey!  I saw you poked my roommate too!  What’s the big idea!?”
  • Someone called my parents’ house trying to get me to show up at the Registrar’s Office at 9 PM, creepy.
  • I had one of my DJ mixes linked on my Facebook page; the first track was Milosh – Time Steals the Day. Of the 40,000 I poked, only one would actually mention the mix later, and she and her friend became fans. Years later, I think, she would do the Peace Corps and track me down online to get an emergency backup of the mix while she was abroad.

I remember the next month seeing there was another kid that did the same thing but he did messages and he did it across universities.  I think he asked for a buck… that might just be my imagination wanting to remember it that way.  I also saw later that he was doing a tour of university campuses.  I kind of regret not going to see what he said, “Hey, I’m the guy that asked you for a buck on The Facebook, ask me anything.”

Smart Pointer Alternative To shared_ptr

shared_ptr brings convenience to memory management, but it also comes at a cost. It blocks the use of covariant return types. It also makes interfacing with other environments difficult, for instance if you want to make a C wrapper around your C++ code. shared_ptr keeps the reference count in its own control block, but if we switch that around and put the reference count inside the object, we can choose when to use smart pointers without losing a consistent count. That way we can keep our interfaces clean of smart pointers but use them internally to get their benefit. Without further ado I present refcountptr, with the disclaimer that I wrote it for this post and have only done minimal testing.
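
The original listing isn’t reproduced here; the sketch below is just a minimal illustration of the idea (not the tested refcountptr): the object carries its own count, so raw pointers and the smart pointer can share it.

    #include <cstdio>

    // Base class for anything that wants to carry its own reference count.
    class RefCounted {
    public:
        RefCounted() : count_(0) {}
        virtual ~RefCounted() {}
        void retain()  { ++count_; }
        void release() { if (--count_ == 0) delete this; }
    private:
        int count_;   // the count lives inside the object
    };

    // Smart pointer that just drives the object's own count.
    template <typename T>
    class refcountptr {
    public:
        explicit refcountptr(T *p = 0) : ptr_(p) { if (ptr_) ptr_->retain(); }
        refcountptr(const refcountptr &other) : ptr_(other.ptr_) { if (ptr_) ptr_->retain(); }
        ~refcountptr() { if (ptr_) ptr_->release(); }
        refcountptr &operator=(refcountptr other) { swap(other); return *this; }
        void swap(refcountptr &other) { T *tmp = ptr_; ptr_ = other.ptr_; other.ptr_ = tmp; }
        T *get() const { return ptr_; }
        T &operator*() const { return *ptr_; }
        T *operator->() const { return ptr_; }
    private:
        T *ptr_;
    };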


Example of usage:
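
Again just a sketch, building on the classes above; Texture is a made-up example type. The interface hands out raw pointers while the internals use the smart pointer:

    // Builds on the RefCounted/refcountptr sketch above.
    class Texture : public RefCounted {
    public:
        void draw() { std::printf("drawing\n"); }
    };

    // A C-style interface can stay free of smart pointers...
    void draw_texture(Texture *t) { t->draw(); }

    int main() {
        refcountptr<Texture> tex(new Texture());  // count goes to 1
        draw_texture(tex.get());                  // ...hand out a raw pointer
        return 0;                                 // tex destructs, count hits 0, Texture is deleted
    }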


Programming the Atari ST 20 Years Later


One week ago everything I knew about the Atari ST could fit in this sentence: “The Atari ST was a computer.”  The past week I have studied it and have built the knowledge required to start programming on it.  Follow me and I’ll show you what I learned.  I will show you how to write software in 68k assembly on one of the iconic computers of the 80’s.

Background info on the Atari ST:

  • The Atari ST was a reasonably priced computer released in 1985
  • Its competitors were things like the Macintosh, Apple //GS, Amiga, and 80286 IBM PC compatibles
  • It ran on the Motorola 68k at 8 MHz
  • The 68k chips have 24-bit addresses, which gives access to a possible 16 MB of memory (the max on the Atari ST is 4 MB)
  • The Atari ST shipped with Atari TOS (The Operating System)
  • It could be run in 3 different resolutions: 640×400 in black and white, 640×200 in 4 colors, and 320×200 in 16 colors
  • You had access to 2^9 different colors but a max of 16 are used at once [thanks for the correction Chris]
  • It came with a GUI called GEM

Setting Up The Emulator:

You didn’t think we were actually going to program on a real Atari ST, did you? Who has room for that sort of stuff? Apparently not us, says my wife. For this tutorial we are going to use Hatari. I didn’t shop around, but this emulator seems pretty awesome; I recommend it. If you are on Mac OS X it’s easy to install with MacPorts (‘sudo port install hatari’) and on Ubuntu it should be in apt-get (‘sudo apt-get install hatari’). MacPorts gave me a valid bootable install that runs EmuTOS (a free replacement for Atari TOS), but Ubuntu gave me an install that can’t boot because it can’t use the EmuTOS image. Don’t worry about that; we are going to use the easily obtainable Atari TOS anyway. Maybe everything works with EmuTOS, I don’t know, I didn’t try it.

Here is a screenshot of what you’ll be seeing right about now when you start Hatari:

Okay, right about now, if you are like me, you are probably asking yourself what is up with the crappy borders around the Desktop. I know! I mean, I don’t know. The Commodore 64 had that too, and the physical machine had it; we’ll be removing it, don’t worry. Here are the next steps to getting your Hatari installation souped up:

  1. Download Atari TOS 2.06, the one I’m using is named Tos206us.img
  2. Make an empty directory on your computer to pretend to be your Atari Hard Drive
  3. In Hatari press F12 to get to the settings
  4. Go to the ROM section and choose the TOS 2.06 image
  5. Go to ‘Hatari Screen’ and select Atari Monitor Mono (trust me! we can play with color later)
  6. Go to “Hard disks” and select the directory you made under “GEMDOS Drive”
  7. Then go back to settings and select the “Reset machine” radio button and click the “OK” button

Now your Hatari should look like this:

*High-Five* Now your system is set up and ready to get the dev tools going.

Setting Up The Development Tools:

For this bit we are going to use the assembler called DevPac3. I’ve heard TurboAss is good too (giggle), but I found more resources talking about DevPac3 thanks to the demoscene. The version I’m using is DevPac 3.10 by HiSoft. Finding it shouldn’t be difficult with an internet search. If you find it as an “.st” file you are in luck; that’s a floppy image, and you can skip the next section.

If you download DevPac and it comes as a directory of files: you could try to be a wiseguy and drag the files to the directory you selected as your hard drive, then run DEVPAC.PRG from there like I tried, but you’d be wrong! (maybe). My installation was giving me trouble because it wanted to be run from the floppy drive. You may get errors when starting DevPac. In this case, go into the Hatari settings (F12), then go to ‘Floppy disks’ and make a new floppy image, put it into a floppy drive, then drag your DevPac files into that floppy disk and run DEVPAC.PRG from the floppy from now on.

So, open up your disk or folder with DEVPAC.PRG and double click it to run.  You should now be seeing this:

Your default settings for DevPac should be good. There is just one thing we want to make sure is set. If you go to Options -> Control… make sure that Format is set to “ST RAM”. This is going to make it so that when we assemble, the code goes straight into memory so we can execute it directly. Another option that will be useful later is selecting “Atari executable”, which will be necessary to make a PRG file.

Running Code:

Okay, this post is getting long so for those impatient types I’m going to give you the code to type in:
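
The original listing isn’t reproduced here; the sketch below is a rough equivalent of the same program. The Line A variable offsets come from the Atari ST Internals book, so double-check them against your copy:

; rough Line A line-draw sketch, not the original listing
superexec   equ $20             ; GEMDOS Super()
conin       equ 7               ; GEMDOS Crawcin(), waits for a key
termproc    equ 0               ; GEMDOS Pterm0()

colbit0     equ 24              ; Line A variable offsets from a0
lnmask      equ 34
wmode       equ 36
x1          equ 38
y1          equ 40
x2          equ 42
y2          equ 44

start       jsr     initialize          ; switch to Super User mode

            dc.w    $A000               ; Line A init, variable table pointer comes back in a0

            move.w  #1,colbit0(a0)      ; draw in plane 0 (black in mono)
            move.w  #$FFFF,lnmask(a0)   ; solid line
            move.w  #0,wmode(a0)        ; replace mode
            move.w  #0,x1(a0)           ; from (0,0)...
            move.w  #0,y1(a0)
            move.w  #100,x2(a0)         ; ...to (100,100)
            move.w  #100,y2(a0)

            dc.w    $A003               ; Line A draw line

            move.w  #conin,-(sp)        ; wait for a key press
            trap    #1
            addq.l  #2,sp

            jsr     restore             ; back to User mode

            move.w  #termproc,-(sp)     ; exit back to the desktop
            trap    #1

initialize  clr.l   -(sp)               ; Super(0): supervisor mode, old stack comes back in d0
            move.w  #superexec,-(sp)
            trap    #1
            addq.l  #6,sp
            move.l  d0,savestack
            rts

restore     move.l  savestack,-(sp)     ; Super(old stack): back to user mode
            move.w  #superexec,-(sp)
            trap    #1
            addq.l  #6,sp
            rts

savestack   ds.l    1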

You could always just save this code to disk in your hard drive folder then load it from DevPac but then you would miss out on the satisfying clicky sounds that the Atari ST makes while you type.

Now that you have the code in memory, let’s assemble it! Go to Program -> Assemble and watch the fun assembling happen; if you have errors in your transcription you’ll see them here. Assuming that went well, you’ll have your assembled program in memory too, so let’s just run it by going to Program -> Run. And now you should see this:

Whoa! Watch out, don’t get angry! We actually did something; look in the top left-hand corner… no, closer, look, there is a diagonal line there! We did that! Okay, you are right, I should have warned you at the beginning of the article that the program was lame. But it is understandable and digestible for a first program. Press any key to exit once you are done soaking in the wonderful line.

Explanation of the Code:

Here is a quick rundown of the code. In the first section we are just creating aliases for offsets and commands. All of those constants I got from the Atari ST Internals book [see references]. Next we jump to the subroutine (jsr) initialize, which sets us up for Super User mode. I didn’t actually test whether Super User mode is required for the demo, but why not go Super, we can handle it. When initialize returns (rts) we call Line A init. Line A is the package of drawing calls we are using. There are other ways to draw on the Atari ST; I think these are the fastest, least portable graphics calls (any greybeards want to correct me?). The weird thing about that call is that it happens with the dc command, which just puts that data directly into our code instead of using some sort of call opcode. If you check out the documentation for init you’ll see that it sets up register a0 with a struct we can edit to set parameters for other Line A calls. We do that in the next section to set up the parameters for the draw line function. Then we finally call the draw line function on line 35. Next we use a GEMDOS function to wait for a key press (function 7 of GEMDOS, which is trap #1) on lines 37-38. Once we have a keypress we restore User Mode, then call the GEMDOS exit function on line 42.


In some ways Atari ST development is better than the development I do today for iOS. In other ways it’s a bit clumsy. A dark shadow falls over me when I think about all the hours of work spent by Atari ST engineers to build something almost lost to oblivion. The silver lining is the amount of love that a select few fans of the Atari ST still have for it. I can’t help but think about the similarities between software development and sand mandalas. Maybe I’ll get around to figuring out how to do animation next, we’ll see.


Function Call Benchmarks for Objective-C

Let’s run some benchmarks on the different ways to call functions. Today’s contestants are C functions, C function pointers, C blocks, C++ virtual methods, and Objective-C methods.

The Code:
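
The original listing isn’t included here; a minimal sketch of this kind of timing harness looks something like the following (the iteration count, timing method, and trivial bodies are assumptions; compile as Objective-C++ and keep optimizations low so the calls aren’t elided):

    #import <Foundation/Foundation.h>
    #include <mach/mach_time.h>
    #include <stdio.h>

    static const int kIterations = 10000000;   // assumed iteration count

    __attribute__((noinline)) int c_function(int x) { return x + 1; }

    struct Base { virtual int call(int x) { return x + 1; } virtual ~Base() {} };

    @interface ObjCCallee : NSObject
    - (int)call:(int)x;
    @end
    @implementation ObjCCallee
    - (int)call:(int)x { return x + 1; }
    @end

    // Time a chunk of work and report microseconds per call.
    static void bench(const char *name, void (^work)(void)) {
        mach_timebase_info_data_t tb;
        mach_timebase_info(&tb);
        uint64_t start = mach_absolute_time();
        work();
        double nanos = (double)(mach_absolute_time() - start) * tb.numer / tb.denom;
        printf("%s - useconds/call:%f\n", name, nanos / 1000.0 / kIterations);
    }

    int main(void) {
        __block volatile int sink = 0;
        int (*fp)(int) = c_function;
        int (^blk)(int) = ^(int x) { return x + 1; };
        Base *obj = new Base();
        ObjCCallee *o = [[ObjCCallee alloc] init];

        bench("c function",         ^{ for (int i = 0; i < kIterations; i++) sink = c_function(i); });
        bench("c function pointer", ^{ for (int i = 0; i < kIterations; i++) sink = fp(i); });
        bench("c block",            ^{ for (int i = 0; i < kIterations; i++) sink = blk(i); });
        bench("c++ virtual method", ^{ for (int i = 0; i < kIterations; i++) sink = obj->call(i); });
        bench("objc method",        ^{ for (int i = 0; i < kIterations; i++) sink = [o call:i]; });

        delete obj;
        return 0;
    }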

The System:

Mac OS X 10.9.3 : LLVM 3.4

Generated Assembly:

c function:

c function pointer:

c block:

c++ virtual method:

objc method:


The Results:

c function – useconds/call:0.004192
c function pointer – useconds/call:0.004705
c block – useconds/call:0.004320
c++ virtual method – useconds/call:0.004731
objc method – useconds/call:0.006549


Objective-C is slower because it performs more work inside objc_msgSend, while the others perform roughly the same number of operations. The only difference among them is the number of reads from memory they have.


So, you decided to play around with Quicklisp, but when you try to load some packages you get a CFFI:LOAD-FOREIGN-LIBRARY-ERROR? CFFI is a Common Lisp library for interfacing with C libraries. So if you install a Quicklisp package like SDL2, it will attempt to load the libSDL2 shared library (libSDL2.dylib on Mac OS X) so that CFFI can forward calls to the shared library. Usually when this error arises, CFFI can’t find the library at all**. What you need to do is just tell CFFI how to find it. CFFI has a variable cffi:*foreign-library-directories*, which is a list of directories to hunt for dynamic libraries. So you can do something like this if your library is in “/opt/local/lib”:
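
For example, a minimal snippet (assuming the library really does live in /opt/local/lib):

    ;; add /opt/local/lib to the places CFFI searches for shared libraries
    (pushnew #P"/opt/local/lib/" cffi:*foreign-library-directories*
             :test #'equal)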

You could even throw this code in your init file for your lisp so that it always gets executed (SBCL: ~/.sbclrc, CCL: ~/.ccl-init.lisp).

** You also might want to check that your architecture is correct for your lisp as well (on Mac OS X “lipo -info <library file>”)

Remembering QBasic


All programmers have a first language; being a child of the early ’90s, my first programming language was Microsoft’s QBasic. Let me set the stage a bit: this was the early ’90s, before most people had access to any sort of network with their computer. There were old books floating around in school libraries featuring games that could be typed into a BASIC interpreter. Their heyday was more the Apple II and Commodore 64 era, when BASIC was almost unavoidable. You can find some of these books published online now; here is the series I remember: BASIC Computer Games by David H. Ahl.

By the time I found these books it was actually confusing how one would use them, since they focused on BASIC, but by the early ’90s BASIC was hidden or unavailable on computers. Lucky for me, Microsoft shipped QBasic with versions of Windows, so kids with enough spare time to poke around on computers would find it eventually.

QBasic had many features that make it a nice environment for beginners:

  • Instant feedback on syntax errors – at each newline QBasic would check the last expression for syntax errors so that the programmer wouldn’t get far with an obvious error
  • Fast run-edit cycle – the BASIC was interpreted and was run by hitting the F5 key, which would immediately jump into execution so a beginner could quickly see the results of their code. I’d beg for that level of feedback in iOS development today.
  • Large body of published code – there were many dialects of BASIC, so it was unlikely that published BASIC worked without tweaking, but if students could persevere, the tweaking rewarded them with deeper understanding.
  • Free games – related to the last point, much of the available code was games. Today kids have access to so many free games on mobile devices that it isn’t as much of a motivator. But back then, kids would do anything for free games, even deal with the messy business of coding.
  • Multiple graphics modes – there were graphics modes, some of which allowed for easy graphics. We liked to make programs that just drew pretty things (see the snippet after this list). They were easily accessible and integrated into the environment, something popular learning languages of today like Python are missing.
  • Fullscreen – to avoid distractions
  • Good language documentation inside the editor
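
To give a flavor of how little it took to draw pretty things, something like this (typed from memory, not a listing from back then) was a complete program:

    SCREEN 13                     ' 320x200 graphics with 256 colors
    FOR i = 1 TO 5000
        PSET (RND * 319, RND * 199), INT(RND * 256)   ' random dot, random color
    NEXT i
    SLEEP                         ' wait for a key press before ending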

QBasic had some problems too. For example, it was kind of a pain to use functions. To this day I’m not sure what the difference between a sub and a function is. I could easily look it up now, but it didn’t occur to me as a kid, and a lot of BASIC instruction seemed to avoid the topic. I remember there being some sort of dialog to add subroutines; I’m not sure if it was required or if you could do it with code alone. I’m sure I could come up with more problems, but they are moot because what it did, it did well enough to set me on my way to becoming a good programmer. I’m not permanently damaged by QBasic’s shortcomings. Sometimes we get caught up in these details about what languages are best for learning programming. After considering it, the things that are most important are environments that are easy to explore, and when I consider all the environments I know today, maybe no environment has done that better than QBasic.

If you want to try out QBasic, it’s easy to find with an internet search. Try looking for QBasic 1.1 by Microsoft; that’s what I used. I just ran mine in DOSBox on Mac OS X with FreeDOS, and it seems to run perfectly.

Implementing Objective-C Message Calling in Nimrod With Macros

Okay, so our goal here is to add support for Objective-C from Nimrod.  We’d like to be able to allocate and use Objective-C objects.  So the way that Nimrod works, at least the parts we are interested in, is that first it does some lexer/parser magic then creates an Abstract Syntax Tree, then from that tree it generates C code, then it runs the C code through a C compiler to generate a binary.

There are a couple ways we could interface with Objective-C (objc).  We could just ignore objc syntax and interface directly with the C interface to the runtime.  We could be doing things like calling the C objc_msgSend function ourselves.  Or we could find a way to inject objc syntax into our generated C source files and instead compile with an objc compiler.  We’ll opt for the latter because in the future it would be neat to be able to define objc classes in Nimrod and it will be easier to debug if we are using the language directly instead of just tons of C function calls into the runtime.

The first thing we’ll want to do is change from compiling with a C compiler to an objc compiler.  Since objc is a super-set of C that should be easy to change and still be able to produce a valid binary.  Luckily the Nimrod compiler has a flag that allows us to do this easily, namely “nimrod objc <filename>” instead of “nimrod c <filename>”.  This makes the resulting code file be named with the “.m” suffix and be compiled with an objc compiler.

Okay, now we are producing objc code; next let’s try to inject some objc code into the resulting .m file. Here again Nimrod provides us with a tool to do that: the emit pragma. {.emit: “<code>”.} sends code verbatim to the resulting file. The author of Nimrod (Araq) put it in the language for exactly this purpose, interfacing with other C’ish languages.
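
The emit pragma isn’t objc-specific; as a tiny illustration of the mechanism (a toy example, not from the callobjc code), you can splice C into a proc and reference Nimrod symbols with backticks:

    proc addViaC(a, b: int): int =
      # the string is pasted verbatim into the generated code; the backticks
      # substitute the mangled names of the Nimrod symbols
      {.emit: """`result` = `a` + `b`;""".}

    echo addViaC(2, 3)   # prints 5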

One thing to note about Nimrod is that while it allows macros on the AST, it doesn’t allow us to use reader macros like Common Lisp does. So we are stuck with the guidelines of the Nimrod language. It will be impossible to make “[NSDictionary alloc]” make sense in Nimrod because it would think that is an array and would be looking for the comma between NSDictionary and alloc. Also, since the brackets already mean something, we’ll need something special to distinguish objc message calls from arrays (e.g. [1,2,3,4,5]). I decided to use the syntax “o[NSDictionary, alloc]” to mean “[NSDictionary alloc]”. Because of the prefix ‘o’ literal we can hunt out the difference between an array and an objc message send.

What I ended up doing was creating a Nimrod method that works on a statement, which is one or more expressions.  That way it can crawl its way down the AST and perform manipulations to convert arrays prefixed with ‘o’ into emit pragmas with the corresponding objc code.

Here is a link to the code as it stands now: callobjc

Notes on understanding the code:

  • Recursive expansion of objc message sends doesn’t exist yet. Check out HEAD; it may be done by now. That’s why you see the messages alloc and init on separate lines instead of idiomatic objc on the same line.
  • {.compileTime.} pragma allows functions to be usable by macros
  • macros operate on PNimrodNode which can be interchangeable with stmt and expr types.  I don’t believe there are any static checks to verify that something is in fact a stmt vs expr.
  • the combo of getAST(<some template call>) is a shortcut to avoid building an AST by hand
  • literals that prefix [] operators become children of the bracket operation.  I found this out by using the lispRepr proc which is indispensable when developing macros in Nimrod.

Intro to Nimrod

Nimrod is a not so popular programming language that I’ve been playing around with lately.  Instead of jumping into different things I’ve done with it I’ll start by giving a tiny introduction that explains why I care about Nimrod.  I’ve been using it for over a week now so I think I have enough of a grasp to have an opinion.

Nimrod first interested me because it compiles to C with little extra overhead, so it’s fast, but the killer feature you don’t see often is the ability to write hygienic macros.  You can manipulate the AST at compilation time.  Since one of my pet projects was writing C with s-exps to facilitate the usage of macros, this was of interest to me.
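
Here is a tiny taste of what that looks like, adapted from the debug macro in the Nimrod tutorial (not one of my projects): the macro receives the argument ASTs and builds new statements out of them at compile time.

    import macros

    macro debug(n: varargs[expr]): stmt =
      # build a statement list that prints each expression and its value
      result = newNimNode(nnkStmtList, n)
      for i in 0..n.len-1:
        result.add(newCall("write", newIdentNode("stdout"), toStrLit(n[i])))
        result.add(newCall("write", newIdentNode("stdout"), newStrLitNode(": ")))
        result.add(newCall("writeln", newIdentNode("stdout"), n[i]))

    var x = 41
    debug(x, x + 1)   # prints "x: 41" and "x + 1: 42"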

There are a few other languages that fill the same niche. I’d say its competitors are C++, D, Go, and Rust. I’ve played with them all except Rust; I don’t think any of them have the ability to write macros.

As I play with the language a bit more you can find some of my example projects on GitHub at nimrod-examples.  Also more can be found out about Nimrod at their website: Nimrod Lang.