I never know whether to put blogs about Apple stuff here or on my company website. This time I wrote a few thoughts about the best bits announced at last week's WWDC keynote (mostly stuff arriving in the Autumn) and put them over on the Wandle Software blog.
It’s easy to forget how much computers have changed over a relatively short time. A book I found in my old room at my parents’ house, “Simpleton Explores Microcomputers,” helped me get some perspective.
I don’t know exactly when it’s from, but it’s certainly early eighties. Possibly 1983 or 1984.
It explores the computers that were available at the time and what it was like to own one. One of the most telling aspects is that it’s written for people who have never owned, or possibly even used, a computer.
It starts with a little history (that’s still, by and large, relevant) and moves on to talk about some of the jargon users will come across and the problems they might be subjected to.
Hard drives are rare beasts that come in 5, 10 or 20MB capacities. There are even these things called “floppy diskettes,” though most users will be saving their data on cassette tapes. (Before MP3s there were CDs. Before CDs there were cassette tapes. Ask your dad.)
There are four pages dedicated to each computer. I love the detail here, where they explain what can be plugged in (such as the cassette unit) and what the CAPS LOCK key does. It talks about the ports and the power switch. But absolutely nothing about the software!
The final section discusses the relative merits of dot-matrix and daisy wheel printers. I still remember my own 9-pin dot-matrix printer. It was slow and noisy. I think I’ll stick with my inkjet, thank you.
It all starts out with a detailed plan. Then someone says, “Can we deliver by October?” A few features get cut, some of the estimates get revised downwards and everyone gets back to work. Then you get to the end of September and find that someone removed all the contingency and that in the rush to finish the requirements a heap of important stuff got missed.
You spend days, weeks, quite possibly months pushing to get the software developed and, a few months before the real end, a crunch point is reached. It’s missing what’s now realised to be critical functionality; it takes an hour to process something that should be instant; the data doesn’t look like you thought it would; it’s too late for any real benefit to be obtained.
The whole project gets cancelled. Or at the very least, suffers from a near-death experience.
The technology, they say, wasn’t right. It was immature. Or badly supported by the vendor. It was open source. Or not open source. Word quickly gets around your industry that the whole project failed because some new software didn’t work.
I think every project that I’ve been on that has been significantly delayed — that is, most of them — has followed a similar arc. And, in each and every case, the diagnosis of the failure has been the same: the technology. And in pretty much every case it wasn’t really true.
The neat thing is that no individual is to blame and, even better, it’s completely impossible to prove.
Let’s look at the timeline above. How different would this have been had another technology been chosen? Not very, I’d wager.
Undoubtedly the technology had problems. It always does. It fell over when you pushed it in some unusual way. It leaked memory. It was too slow. It doesn’t really matter whether you’re using an exciting new technology from an unknown vendor or a widely used “industry standard”: if you’re doing anything vaguely interesting you will come across the unexpected. But given time, almost all these problems are tractable.
Unfortunately, it’s time that was lacking. Testing, contingency, everything deemed non-essential was sacrificed in order to make an externally defined ship date.
The thing to remember about a software development project is that the only deliverable that matters to end-users is the software. When users come to look at the near-finished product and it doesn’t meet their needs, they blame the software and the development team.
The development team often end up blaming the new development tools as well because, well, the alternative is saying that they screwed up, and who is going to make a career-limiting admission like that?
The truth, however, is often revisionist. It’s altered by people who either have an idealised view of how the project should have been run rather than how it really went or by people who focus on the wrong parts of the whole.
They don’t remember or weren’t involved in the discussions that preceded the development work. They don’t look back at the project plan or the design documents or even the reams of requirements that they probably signed off months ago.
All the problems sure look like technology problems. Missing functionality. Low quality. Poor performance. But are they ultimately caused by poor technology?
Nearing the end of the project it is easy to forget all the work that happened at the start. It’s also easy to forget that the preparatory work was late and incomplete.
Project plans and design documents and test strategies are all important. It would be a mistake to try to run a large project without some form of them, but they’re either invisible to most end users or transitory works that end up filed away and rarely looked at once the software is functional.
As ever, the real problem is the people. Politics. Pressure. Poor communication. Technology problems are almost always easier to debug than the people involved.
Last week, GitHub was the victim of a days-long denial-of-service attack that meant it wasn’t reliably available despite the hard work of their team.
What surprised me was the number of people on Twitter complaining that they couldn’t work. I was surprised because one of the key things about Git (the tool rather than the website) is that it is distributed; that is, there is no centralised server in the way that there is with, say, Subversion. Clearly there are things you can’t do without GitHub – most obviously downloading new repositories – but almost everything else can be done peer-to-peer. The rest of this post explains how.
First, a couple of disclaimers. I’m not a Git expert and I’m not a server admin. There certainly are other ways of doing this. Some may even be more secure or easier in your environment. What I like about this approach is that it works on pretty much any machine that has git installed.
Step one: set up a Git server.
“Woah, now,” you’re thinking, “I don’t have all day!”
Turns out it’s just two commands:
touch .git/git-daemon-export-ok
git daemon
(Assuming you’re in the folder that’s at the root of the repository you want to export.)
Then on your other PC you can enter the following command to pull or push:
git pull git://othermachine/Users/stephend/temp/
You could even clone a repo to a local server, run a git server on it and use it as a temporary “central” repository.
If there’s a downside, it’s that it’s hard to recommend pushing to a repository this way because there’s no authentication. However, as a quick and dirty way of pulling commits between any two machines with git installed, it’s pretty reliable.
(If you know git you’re probably thinking “Why not ssh?” The short version is that it’s often locked down in corporate machines and is usually not available on Windows machines. But if you can use it you probably should.)
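The “temporary central repository” idea above can be sketched end-to-end. This is a minimal sketch using throwaway paths under /tmp; all the names here are hypothetical stand-ins for your real repository and machine:

```shell
#!/bin/sh
# Sketch: turn a spare machine into a temporary "central" repository.
set -e
rm -rf /tmp/demo-src /tmp/central.git

# A stand-in for your real working repository:
git init -q /tmp/demo-src
cd /tmp/demo-src
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Make a bare copy to act as the shared repository (no working tree
# is needed on a server):
git clone -q --bare /tmp/demo-src /tmp/central.git

# Mark it as exportable to the git daemon:
touch /tmp/central.git/git-daemon-export-ok

# Serve it. --enable=receive-pack allows pushes; remember there is no
# authentication, so only do this on a network you trust:
# git daemon --base-path=/tmp --enable=receive-pack --reuseaddr
```

With the daemon running, colleagues would clone with something like `git clone git://yourmachine/central.git` – the `--base-path` option strips the /tmp prefix from requested paths.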
“Traditionally” I would have had to remove version one from sale and offer a completely new app, which would have meant that existing users would have to pay again to get the same functionality. Or I’d have to support two apps. Or I’d keep the same app in the store and all existing users would get downgraded to the free version. None of these solutions seemed fair to existing users.
What I wanted was for people who had bought version one to get the full, unlocked version and for new users to be prompted for the paid upgrade.
Since iOS 7 came out in 2013, that is entirely possible. I’ll explain how it’s done here. This isn’t just some theoretical “I’ve seen the documentation” claim – I’ve done it with one of my own apps, Rootn Tootn.
The really short answer: take a look at the session 308 video from WWDC 2013. That’s the only information from Apple that explains how to do it. They have documented the API calls that are required but the actual process is left as an exercise for the interested student. And there are quite a few steps if you want to do it properly.
Firstly, you need to get the app receipt. Before iOS 7 receipts only made sense for in-app purchases, but now they are available for all purchases and come in the same format as receipts from the Mac App Store.
Receipts have a number of useful features. In the past they have been used to validate purchases, and they can still be used for this. What’s interesting with the new receipts is that they include both the original purchase and the version number of that original purchase. This means that we can decide whether a user gets the paid functionality by looking for either an in-app purchase or a purchase date before a particular time or, more likely, before a particular version.
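The version-based decision boils down to a simple comparison. Here’s a hypothetical sketch of the gate in shell form – in a real app this comparison happens in code against the parsed receipt field, and note that on iOS the “original application version” in the receipt is the build number (CFBundleVersion), not the marketing version:

```shell
#!/bin/sh
# Hypothetical gate: anyone whose receipt shows an original purchase
# before version 2.0 gets the paid features for free.
ORIGINAL_VERSION="1.3"   # would come from the parsed receipt
PAYWALL_VERSION="2.0"    # first version sold with the freemium model

# Version-aware comparison: whichever sorts first is the older version.
lowest=$(printf '%s\n%s\n' "$ORIGINAL_VERSION" "$PAYWALL_VERSION" | sort -V | head -n 1)

if [ "$lowest" = "$ORIGINAL_VERSION" ] && [ "$ORIGINAL_VERSION" != "$PAYWALL_VERSION" ]; then
  echo "unlock paid features"
else
  echo "prompt for the paid upgrade"
fi
```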
When you download an app you should get a receipt automatically, but you can also use the SKReceiptRefreshRequest class to force one to be generated. (This is also useful during development where, obviously, there is no receipt.)
Once the refresh has completed, you use NSBundle’s appStoreReceiptURL property (on the main bundle) to access the receipt.
Once you have the receipt the bad news starts.
It’s not in a user friendly format. And Apple do not provide any APIs to read it. Check out Apple’s documentation:
The outermost portion (labeled Receipt in the figure) is a PKCS #7 container, as defined by RFC 2315, with its payload encoded using ASN.1 (Abstract Syntax Notation One), as defined by ITU-T X.690. The payload is composed of a set of receipt attributes. Each receipt attribute contains a type, a version, and a value.
If security is important to you, you should probably write your own code to do this. ASN.1 is a standard format and it’s not that hard.
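If you want to eyeball the format before writing a parser, openssl’s asn1parse tool will dump any DER/ASN.1 structure. There’s no real App Store receipt to hand here, so this sketch generates a small DER file as a stand-in; with a real receipt you’d point `-in` at the file behind appStoreReceiptURL instead:

```shell
#!/bin/sh
# Generate a throwaway DER structure (a PKCS#8 key) as a stand-in,
# then dump its ASN.1 structure:
set -e
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
        -outform DER -out /tmp/demo.der 2>/dev/null
openssl asn1parse -inform DER -in /tmp/demo.der | head -n 3
```

Each line of asn1parse output shows an offset, depth, length and type (SEQUENCE, INTEGER, OCTET STRING and so on), which is exactly the kind of nested structure the receipt’s PKCS #7 container uses.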
There are apps that generate the validation code for you, and there are also open-source projects that do the same thing. I’ve used RMStore; there’s also VerifyStoreReceiptiOS. The main disadvantage of these is that, being standard, open code, they make it easier for crackers to reverse-engineer how you record that a purchase has been made.
And there you have it. It is possible. It’s just a lot harder than you might imagine. Remember this when someone tells you that it can’t be done.
“Preview” is damaged and can’t be opened. You should move it to the Trash.
This was the rather surprising error message that I’ve been getting when I try to open a PDF from the Finder since I upgraded to OS X Yosemite. It’s bad enough when you get an error message, but one suggesting that you delete a frequently used app is inconvenient to say the least!
Since I found it, I’ve been using a workaround: start Preview.app and open the file manually.
But there’s a better answer. Apparently it just has a duff “quarantine” attribute set. To get rid of it and get things back to normal simply do the following in a Terminal window:
/Users/stephend $ cd /Applications
/Applications $ xattr -l Preview.app
com.apple.quarantine: 0006;53563f06;Acorn;
/Applications $
If you see the same thing, proceed to the next step:
/Applications $ sudo xattr -d com.apple.quarantine Preview.app
Password:****
And that’s all. Now opening a PDF from the Finder should work. It’s bizarre.
I raised Radar 19384558 with Apple. Please dupe if you see the same thing. Thanks to Gus Mueller (developer of Acorn) for the support.
Update: It seems that a couple of other apps also have the same, incorrect attributes:
/Applications $ xattr -l *.app Utilities/*.app | grep Acorn
Mail.app: com.apple.quarantine: 0006;53563f06;Acorn;
iPhoto.app: com.apple.quarantine: 0006;53563f06;Acorn;
I have also removed those tags, though I’ve not seen any problems with them so far.