All posts by Stephen Darlington

ShareEverywhere

ShareEverywhere main screen

I was so busy when it came out that I never quite got around to blogging about it here: I have a new app out! It’s called ShareEverywhere. It is built exclusively for iOS 8 and uses the new, built-in “share” functionality, allowing you to share to a good number of services from any app that uses the standard share button.

When I first wrote it, I wasn’t sure how many, if any, developers would build share widgets into their apps. Now that we know the answer is “a lot of them,” I still use ShareEverywhere because it beats having a dozen widgets hiding in your action menu. And there are still services, like Pinboard.in, that don’t have their own native apps.

It’s available now in the App Store for your iPhone or iPad. It costs £1.49, $1.99, €1.79 or your local equivalent.

How do I do “X” in Swift?

Maybe I have some duff feeds in my RSS reader. Maybe I’ve made some poor choices about who I follow on Twitter. But I see links along these lines all the time:

How do you do something in Swift?

The answer is, almost always, exactly the same way you’d do it in Objective-C!

You want to do pull-to-refresh? Same.

You want to play with location services? Same.

You want to display one of the new UIAlertControllers? That’s the same, too.

Why? Because they’re all part of the underlying framework, the framework that’s there whichever language you’re using. That includes both of Apple’s languages, Swift and Objective-C, and everything else: C#, Python, Ruby.

That’s not to say that there is nothing useful to write about Swift. As a new language there is plenty to write about: new ways of structuring your code, better ways of implementing algorithms, tricks for avoiding common errors and pitfalls. But interfacing with the OS? The syntax changes slightly, but the code is pretty much the same.
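To make that concrete, here is a minimal sketch, my own illustration rather than anything from the posts I’m grumbling about, of presenting one of those new UIAlertControllers in iOS 8-era Swift. The class and method names are invented for the example; the UIKit calls are exactly the ones you would make from Objective-C.

    import UIKit

    class ExampleViewController: UIViewController {

        // Present one of the "new" iOS 8 alerts. This is ordinary UIKit:
        // the same initialiser and methods you would call from Objective-C,
        // just written in Swift.
        func showAlert() {
            let alert = UIAlertController(title: "Hello",
                                          message: "Same framework, different syntax.",
                                          preferredStyle: .Alert)
            alert.addAction(UIAlertAction(title: "OK", style: .Default, handler: nil))
            presentViewController(alert, animated: true, completion: nil)

            // The Objective-C equivalent of that last line:
            // [self presentViewController:alert animated:YES completion:nil];
        }
    }

Pull-to-refresh and location services are the same story: UIRefreshControl and CLLocationManager, reached from whichever language you prefer.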

Starting Coding

Graham Lee’s “Why is programming so hard?” made me think about how I started programming and whether I’d be coding now if I were twelve.

When I began, I had a Sinclair Spectrum. Back then, home computers booted into a programming language (typically BASIC) and to do anything you needed to know at least one command. To be fair, even then many people didn’t get beyond J, Symbol Shift-P, Symbol Shift-P (LOAD “”, the command used to load a program from tape).

My first memory of using a Spectrum is trying to type in a program from the manual. This was harder than it sounds as the Spectrum had a weird-and-wonderful command entry system in lieu of a lexer in the BASIC interpreter. I can’t even remember if I got it working, but I was hooked.

I still played games, but I occasionally looked at something and thought “how do they do that?” I’d then try to replicate it. Over time I’d spend more time coding and less time gaming. I’d also read about programming and type in code from books and magazines. Then as now, working with other people’s code can be very instructive.

(But, to answer the question in the original blog, I can’t really remember the process of going from a few lines of trivial code to something more complex.)

I think three things — at least — were appealing at the time.

First, I had one. I guess that sounds obvious but even a few years earlier that wouldn’t have been a given. Prior to my interest in computers, I was fascinated by cars. I could identify all the models on sight, memorised the statistics and could tell you that a new Ford Fiesta had terrible handling. But I didn’t really know any of the basics about actually driving as it would be another six years before I could get a provisional licence and start taking lessons. The possibility of owning a car was even more remote (I’ve still never owned one!). There’s a dramatic difference between reading about something and actually using it.

Second, it was possible to pretty much understand the whole machine, from the hardware right up to the BASIC interpreter. I didn’t, of course, but nothing seemed out of reach. I started in BASIC but I also played with Z80 assembler. The Spectrum was a simple enough machine that working at that level exposed a lot of what was happening with the hardware (interrupts, memory-mapped hardware) even without breaking out a soldering iron.

Third, even the state of the art seemed achievable. I was writing programs to fade text in from black to white and animating crude figures around the screen. What was Jet Set Willy but more of the same? (That’s a rhetorical question. In practice the difference was pretty substantial, but it was still one guy sitting alone at home.)

There was a learning curve but I thought that everything was understandable and figured that I could do even the hard stuff. This really spurred me on, made me try things and play around. There is no better way of learning; nothing kills my enthusiasm more than thinking I can’t solve a problem.

Fast forward to today and, while most people own a computer, the machines are not fully understandable and the state of the art seems distant and impossible for a mere mortal to achieve.

Look at a game that appears as simple as Monument Valley. It doesn’t take long to realise that it’s not that simple. Where would you even start? What would you need to understand just to put a few of your own pixels on screen?

I fear that I would get overwhelmed by the amount that I didn’t know — and possibly would never know — before I got interested enough to keep going.

So where is the “in” to programming these days?

It’s easy to get hung up on the programming language or environment, but I don’t think that’s the key. If it’s interesting, people will put in the time to get past those kinds of obstacles. (Whether they should need to or not is another question.) Instead, we need projects that can start small and gradually grow more complex, in environments the user understands, with results that look immediately impressive or are immediately useful.

This is why I don’t think learning to code in, say, Python is likely to be compelling for many people. It is programming, but few people use command lines any more and the results don’t look impressive. Printing the numbers from one to ten? Yawn.

Web programming is, perhaps, the obvious example. People start by copying and pasting and can graduate to writing their own code. Please don’t start the BASIC versus JavaScript debate in the comments.

I have heard about (but have no experience of) building “mods” for Minecraft. Much as I did, people play the game and get hooked. Rather than trying to replicate features, they try to modify them or enhance them.

Once people have left academia, the most popular method of starting programming that I’ve seen has been Excel. People start building complex spreadsheets, then add a few macros and then begin playing around with VBA. At each stage they can do a little more than they could before and at each stage it’s useful.

So, back to the original question: would I be programming now if I were twelve? I think the most likely route would be through something like Minecraft, but I never got hooked on anything more complicated than Bomb Jack, so I have to think the answer would be no. For my own sanity, I probably shouldn’t think too hard about whether or not that’s a good thing.