Category Archives: Computing

Articles about computers and the IT industry.

How do I do “X” in Swift?

Maybe I have some duff feeds in my RSS reader. Maybe I’ve made a few poor choices about who I follow on Twitter. But I see links along these lines all the time:

How do you do something in Swift?

The answer is, almost always, exactly the same way you’d do it in Objective-C!

You want to do pull-to-refresh? Same.

You want to play with location services? Same.

You want to display one of the new UIAlertControllers? That’s the same, too.

Why? Because they’re all part of the underlying framework, the framework that’s there whichever language you’re using. That includes both Apple languages, Swift and Objective-C, and everything else: C#, Python, Ruby.
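
To make that concrete, here’s a minimal sketch (mine, in present-day Swift spelling, so treat it as illustrative) of showing one of those alert controllers; the UIKit classes and methods are exactly the ones you’d call from Objective-C:

    import UIKit

    class DemoViewController: UIViewController {
        func showAlert() {
            // The same UIKit calls you'd make from Objective-C.
            let alert = UIAlertController(title: "Hello",
                                          message: "Same framework, different syntax",
                                          preferredStyle: .alert)
            alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
            present(alert, animated: true, completion: nil)
        }
    }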

That’s not to say that there is nothing useful to write about Swift. As a new language there are lots of things to write about: new ways of structuring your code, better ways of implementing algorithms, tricks to avoid common errors or pitfalls. But interfacing with the OS? The syntax changes slightly but the code is pretty much the same.

Starting Coding

Graham Lee’s “Why is programming so hard?” made me think about how I started programming and whether I’d be coding now if I was twelve.

When I began, I had a Sinclair Spectrum. Back then, home computers booted into a programming language (typically BASIC) and to do anything you needed to know at least one command. To be fair, even then many people didn’t get beyond J, Symbol Shift-P, Symbol Shift-P (‘LOAD “”’, the command used to load a program from tape).

My first memory of using a Spectrum is trying to type in a program from the manual. This was harder than it sounds as the Spectrum had a weird-and-wonderful command entry system in lieu of a lexer in the BASIC interpreter. I can’t even remember if I got it working, but I was hooked.

I still played games, but I occasionally looked at something and thought “how do they do that?” I’d then try to replicate it. Over time I’d spend more time coding and less gaming. I’d also read magazines and type in code from books and magazines. Then as now, working with other people’s code can be very instructive.

(But, to answer the question in the original blog, I can’t really remember the process of going from a few lines of trivial code to something more complex.)

I think three things — at least — were appealing at the time.

First, I had one. I guess that sounds obvious but even a few years earlier that wouldn’t have been a given. Prior to my interest in computers, I was fascinated by cars. I could identify all the models on sight, had memorised their statistics and could tell you that a new Ford Fiesta had terrible handling. But I didn’t really know any of the basics about actually driving as it would be another six years before I could get a provisional licence and start taking lessons. The possibility of owning a car was even more remote (I’ve still never owned one!). There’s a dramatic difference between reading about something and actually using it.

Secondly, it was possible to pretty much understand the whole machine, from the hardware right up to the BASIC interpreter. I didn’t, of course, but nothing seemed out of reach. I started in BASIC but I also played with Z80 assembler. The Spectrum was a simple enough machine that working at that level exposed a lot of what was happening with the hardware — interrupts, memory mapped hardware — even without breaking out a soldering iron.

Thirdly, even the state of the art seemed achievable. I was writing programs to fade text in from black to white and animating crude figures around the screen. What was Jet Set Willy but more of the same? (That’s a rhetorical question. In practice the difference was pretty substantial, but it was still one guy sitting alone at home.)

There was a learning curve but I thought that everything was understandable and figured that I could do even the hard stuff. This really spurred me on, made me try things and play around. There is no better way of learning; nothing kills my enthusiasm more than thinking I can’t solve a problem.

Fast forward to today and, while most people own a computer, the machines are not fully understandable and the state of the art seems distant and impossible for a mere mortal to achieve.

Look at a game that appears as simple as Monument Valley. It doesn’t take long to realise that it’s not that simple. Where would you even start? What would you need to understand just to put a few of your own pixels on screen?

I fear that I would get overwhelmed by the amount that I didn’t know — and possibly would never know — before I got interested enough to keep going.

So where is the “in” to programming these days?

It’s easy to get hung up on the programming language or environment, but I don’t think that’s the key. If it’s interesting people will put in the time to get past those kinds of obstacles. (Whether they should need to or not is another question.) Instead, we need projects that can start small and gradually get more complex, and they have to be in environments that the user understands, and the results need to look immediately impressive or be immediately useful.

This is why I don’t think learning to code in, say, Python is likely to be compelling for many people. It is programming, but few people use command lines any more and the results don’t look impressive. Printing the numbers from one to ten? Yawn.

Web programming is, perhaps, the obvious example. People can start by copying and pasting and then graduate to writing code. Please don’t start the BASIC versus JavaScript debate in the comments.

I have heard about (but have no experience of) building “mods” for Minecraft. Much as I did, people play the game and get hooked. Rather than trying to replicate features, they try to modify them or enhance them.

Once people have left academia, the most popular method of starting programming that I’ve seen has been Excel. People start building complex spreadsheets, then add a few macros and then begin playing around with VBA. At each stage they can do a little more than they could before and at each stage it’s useful.

So, back to the original question: would I be programming now if I was twelve? I think the most likely route would be through something like Minecraft, but I never got hooked on anything more complicated than Bomb Jack so I have to think the answer would be no. For my own sanity I probably shouldn’t think too hard about whether or not that’s a good thing.

Recruitment Tests

Over the years I’ve been asked to do a lot of programming aptitude tests. I’ve had to do some in the last couple of months and I’m deliberately writing this now, before I get the results back from the most recent one, so you won’t think that this post is just sour grapes…

I’m not going to get into the details of the tests because it doesn’t really matter what they are or who administered them for the purposes of this post.

I don’t like these tests. I don’t think they work well for either the candidate or even the company that is using them.

An obvious complaint is that the tests bear little resemblance to Real Life. On Twitter I cynically suggested that a more realistic test would involve trying to write a program based on an ambiguous and constantly changing specification. The test would end when you quit in frustration.

A bigger issue, I think, is the amount of time it takes and when they take place.

Let’s tackle the latter point first. Every time I’ve been asked to take a test it has been before I’ve spoken to anyone. All they’ve seen of me is my CV/resume. All I know about the company and job is what I see on their website and a brief job description. Am I a nice person? What is it like to work there? Neither party has any idea.

My objection to this is that we both have a lot to lose if the recruitment process goes wrong. I always consider an interview to be a two-way process — they need to learn about me and I need to understand more about them — yet the very first stage in the recruitment process is them demanding a couple of hours’ or a day’s commitment of my time but only a couple of minutes of theirs.

To be clear, I have sat on the other side of the table. I do know that far too many candidates have no real hope of filling the position. But, equally, you don’t want to push away the most qualified candidates.

Two hours is a good chunk of time. A day is a lot of time. If I already have a full-time job and am just looking to move to something better, how are those requests going to fit into my schedule? Badly, I’d hazard. A full day is half my weekend. Two hours is an evening. (At least! Finding two contiguous hours at home these days is a challenge.)

Sure, the least qualified candidates will fail it but the very best candidates probably won’t even take it. They’ll just say “no” and move on to an opportunity that requires less upfront effort.

I came across this quote earlier today: “Never allow someone to be your priority while allowing yourself to be their option” (either Mark Twain or Maya Angelou, not sure who said it first). That seems very appropriate here. Should I really prioritise these companies over others? Are these companies that special? (A possible exception would be really big names like Google or Apple but I think it’s fair to say that none of the companies that have asked me to do tests have been in that class.)

So, what’s the answer? Certainly there’s no silver bullet that pleases everyone and finds only the very best matches. Having thought about this I think the best compromise would be asking candidates to do something as basic as FizzBuzz (but probably not exactly that as it would be very easy to Google the answer).
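
(For anyone who hasn’t come across it, FizzBuzz really is as basic as it sounds. A Swift version, sketched off the top of my head, might look like this:)

    // Print 1 to 100, substituting Fizz, Buzz or FizzBuzz for
    // multiples of 3, of 5, or of both.
    for i in 1...100 {
        switch (i % 3, i % 5) {
        case (0, 0): print("FizzBuzz")
        case (0, _): print("Fizz")
        case (_, 0): print("Buzz")
        default:     print(i)
        }
    }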

To people who have never done any recruitment this probably sounds incredibly patronising. All I can say is that I wish that were true.

And if you really want to administer a test that takes more than thirty minutes, I think it would be more acceptable much later in the recruitment process. Or, failing that, offering to pay isn’t a ridiculous suggestion, though I suspect most employers would argue otherwise.

Afterword: After writing all this, why am I even taking the tests? Mostly because I’m currently between jobs so it’s difficult to argue that I don’t have the time.

After-afterword: I passed the test. I still don’t like them.

Swift Types

If you look at the Swift Language guide, you get the distinct impression that the type system is sleek and modern. However, the more you dig into it, the more eccentricities you find.

The one I’m going to look at today makes sense only if you look at the problem domain from a slightly skewed perspective. I’ve been trying to think whether this is a sensible, pragmatic way of designing a language or a mistake. Judge for yourself.

So, the feature. Let’s define a dictionary:

var test1 = [ "Foo" : "Bar" ]

Check the type and we find that it’s of type Dictionary<String,String>. The generics and type inference are doing exactly what you’d imagine.

test1["Test"] = "Works"

So basically it’s all good.

So, what type is this expression?

var test2 = [:]

And why does this not work?

test2["Test"] = "Doesn't work"

Let’s take a step back. What’s the problem? Well, [:] is an empty dictionary but gives us no clue what the type is. Remember, Swift dictionaries and arrays use generics, so the compiler only allows objects of a particular type to be added.
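
To see that constraint in action with the test1 dictionary from earlier:

test1["Another"] = "Also fine"  // a second String value is accepted
// test1["Count"] = 42          // compile error: an Int is not a String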

A good guess for the type would be Dictionary<AnyObject,AnyObject>. But a little fishing around tells you that’s not the case, because AnyObject is neither “Hashable” nor “Equatable” and keys need to be both.

The answer? test2 is an NSDictionary. That is, in this one circumstance, Swift extends outside its native dictionary type and decides to use a class found in Foundation.

Once you know that, it is clear that the second line should be:

test2.setValue("Does work now", forKey:"Test")

Maybe if you’re familiar with the guts of both Objective-C and Swift this behaviour makes sense, but a language built-in returning a completely different type just because it can’t figure out the type feels broken to me.

In the end I think I’ve convinced myself that, while it might be convenient to allow this syntax, it’s a bad idea to saddle the language with these semantics so early on. In a few years, when no one uses Objective-C or when Swift is no longer fully tied to Cocoa, will this make sense?

I would prefer this to be a compiler error, with the correct approach being explicit about the type:

var test2:Dictionary<String,String> = [:]
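
For completeness, the explicit initialiser gets you the same thing, as does the square-bracket type shorthand in later builds of the language:

var test3 = Dictionary<String, String>()
var test4: [String: String] = [:]
test3["Test"] = "Works"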

Thoughts?

Swift Hate

I’m seeing a surprising amount of vitriol aimed at Swift, Apple’s new programming language for iOS and Mac development. I understand that there can be reasoned debate around the features (or lack thereof), syntax and even the necessity of it but there can be little doubt about the outcome: if you want to consider yourself an iOS developer, it’s a language that you will need to learn.

The only variable I can think of is when you learn it.

I think it’s perfectly reasonable to delay learning it as you have code to deliver now and because Swift is, well, very beta currently.

But asking what the point of Swift is isn’t constructive. Asking what problems can be solved exclusively by Swift makes no sense at all: you can do almost anything in most programming languages. Just because Intercal is Turing complete doesn’t mean that you’d want to use it for any real work. What varies between languages is what’s easy and what’s hard.

Objective-C undoubtedly makes some things easier than Swift. It’s a more dynamic language and its C foundations open up a lot of low-level optimisations that probably are not there in higher level languages.

But that same flexibility comes with a price: segmentation faults and memory leaks; pointers; easy-to-get-wrong switch statements; a lack of bounds checking. It also inherits a lot of ambiguity from the original C language specification, which makes certain automatic optimisations impossible.
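
To pick on just one item from that list: Swift’s switch statement doesn’t fall through implicitly and has to be exhaustive, so the classic forgotten-break bug simply won’t compile. A quick sketch:

    func describe(n: Int) -> String {
        switch n {
        case 0:
            return "zero"   // no break needed; execution never falls through
        case 1...9:
            return "small"
        default:            // the compiler insists every case is covered
            return "big"
        }
    }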

How many applications require low-level optimisations more than safety? (And that’s ignoring that the biggest optimisations are usually found in designing a better algorithm or architecture.) How often is it better to have five lines of code instead of one? Every line is a liability, something that can go wrong, something that needs to be understood, tested and maintained.

Whatever its failings, it’s already clear that Swift is more concise and safer than Objective-C.

There are absolutely applications where Objective-C, C or C++ would be a better choice than Swift. It’s the old 80-20 rule applied to a programming language. And, for those resistant to learning a new language, the 80% is not on “your” side.

Right now, some of this requires a bit of a leap of faith. Swift clearly isn’t finished. You can either learn it now and potentially have some say on what the “final” version looks like, or you can learn it afterwards and just have to accept what’s there. But, either way, you’ll probably be using it next year. Get used to it.

Learning Swift

Swift is a new programming language designed by Apple for development on OS X and iOS. I thought that I should try to learn it a little so I decided to convert a non-trivial collection of classes from one of my apps (www.cut) into Swift. I always find it better to work on a real project rather than just to play around with things aimlessly. Also, by re-working an old project, I knew that all the problems I would find would be language related rather than anything to do with the architecture.

The classes are also data related rather than being UI, so it is mostly a test of the language itself rather than how it interfaces with Objective C.

First impressions are good. Swift is mostly nice and consistent which, although it sounds like damning with faint praise, is actually a compliment. I read a little of the language guide and dove straight in. A lot of the attempts to get the following right were me just typing stuff, guessing the syntax rather than looking it up.

Quite by accident I think my code sample inadvertently shows an area of strength for Objective-C and weakness for Swift.

The idea of the code is that it reads a Plist, instantiates a class based on that configuration and fills in a number of properties.

The first half of the code looks like this:

        NSError* error = nil;
        NSString *plistPath = [[NSBundle mainBundle] pathForResource:@"XXX" ofType:@"plist"];
        NSData *plistXML = [[NSFileManager defaultManager] contentsAtPath:plistPath];
        NSDictionary *temp = (NSDictionary *)[NSPropertyListSerialization propertyListWithData:plistXML
                                                                                       options:NSPropertyListMutableContainersAndLeaves
                                                                                        format:NULL
                                                                                         error:&error];
        NSArray* values = [temp objectForKey:@"Entries"];

This was pretty straightforward to convert into Swift, though the type system gave me issues:

    let plistPath =  NSBundle.mainBundle().pathForResource("XXX", ofType: "plist")
    let plistXML = NSFileManager.defaultManager().contentsAtPath(plistPath)
    var error:NSError? = nil
    var format:CMutablePointer<NSPropertyListFormat>? = nil
    let immutable:NSPropertyListReadOptions = 0
    var pList = NSPropertyListSerialization.propertyListWithData(plistXML,
        options:NSPropertyListReadOptions(NSPropertyListMutabilityOptions.Immutable.toRaw()),
        format:format!,
        error: &error) as NSDictionary

Getting the options property seems very clumsy; I’m sure that there must be a better way of doing it.

I had a real problem with the in/out parameters format and error. Not only was the documentation confusing but the Swift Playground kept crashing, making it difficult to distinguish between what I was doing wrong and where the compiler itself was messing up. It’s also a bit odd that, though both are in/out parameters, they need different methods to extract the values.

(To be fair, this is a beta and it is the first version of a whole language and compiler. I mention the crashes not because they’re unexpected or even especially bad, just as an honest description of the difficulty I had.)

The next section, using the data in the plist, was much more problematic. The code looks like this, but I’ve trimmed a lot so what we have here isn’t terribly useful now!

        proxy = nil;
        for (NSDictionary* i in values) {
            if ([thing isEqualToString:[i objectForKey:@"Class"]]) {
                dynamicClass = NSClassFromString([i objectForKey:@"BaseClass"]);
                proxy = [[dynamicClass alloc] init];
            }
        }

I didn’t do the straight conversion. A few more years of Cocoa programming allowed me to notice an optimisation:

    let valueList = values.filteredArrayUsingPredicate(NSPredicate(format: "Class = %@", value))
    let valueData = valueList[0] as NSDictionary

This same approach would work in Objective-C.

Next I tried:

  let dynamicClass = NSClassFromString(valueData["BaseClass"])
  let proxy = dynamicClass()

The first line works as expected. The second line doesn’t compile.

Is there anything else that we can do with dynamicClass? Let’s see. It’s an AnyClass which is a type alias for AnyObject.Type. Which doesn’t really help.

I tried casting it to a base class but no matter what I tried I couldn’t alloc/init it (in Objective C terms).
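
(As an aside, later versions of Swift did make this expressible: mark the base class’s initialiser as required and you can instantiate through the metatype. A sketch with made-up class names:)

    class BaseProxy {
        required init() {}       // 'required' is what makes metatype instantiation legal
    }
    class FooProxy: BaseProxy {} // hypothetical stand-in subclass

    let cls: BaseProxy.Type = FooProxy.self
    let proxy = cls.init()       // creates a FooProxy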

Josh Smith figured out how to do it by creating a factory class in Objective-C.

I tried (and failed) to get it to work by calling some of the Objective-C runtime directly:

// Allocate a small buffer and ask the runtime to construct an instance in it
var bytes:Byte[] = [0,0,0,0]
let b = objc_constructInstance(dynamicClass, &bytes)

But the second line doesn’t work when using ARC. (To be fair, Xcode struck through the definition so I didn’t have much confidence that it would work!)

So that leaves Josh’s call out to Objective-C to be the best method that I’m aware of.

In the end I just used a switch statement to select between the relatively limited number of options that I had. Not as clever, but maybe that’s a good thing?
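
Something along these lines, with hypothetical class names standing in for the real ones:

    class BaseProxy {}
    class FooProxy: BaseProxy {} // made-up stand-ins for the app's real classes
    class BarProxy: BaseProxy {}

    // Map the "BaseClass" string from the plist onto a concrete type by hand.
    func makeProxy(name: String) -> BaseProxy? {
        switch name {
        case "FooProxy": return FooProxy()
        case "BarProxy": return BarProxy()
        default:         return nil
        }
    }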