Tag Archives: technology

Is MySpace really the future of email?

Am I getting old? Perhaps. I’ve been using email since 1992, when I first went to university, so I just find it second nature now. It’s got to the point where I organise my whole life with it and get quite frustrated when I actually have to call someone to get something done that could more easily be done asynchronously ((That’s to say, when I send an email you don’t have to be there to answer it, unlike a phone call or an instant message where you do.)). But that’s not how many people think, according to ZDNet.co.uk.

The gist is that many people are now using websites such as Facebook and MySpace instead of email. In fact, they claim, teenagers only use email to talk to adults.

Is this the way of the future? Is it only old age and inertia that’s stopping people like me from using MySpace exclusively?

I don’t think so. It’s not that I’m a Luddite. I do use instant messenger and I use my mobile phone more for text messaging than for voice calls, but there are a few issues that we need to work through first.

The first and most obvious is that of convenience. With email I can use one program (or check one website) to see all the messages that I am interested in reading. With Facebook I have to check there, then again on MySpace for my messages there and, finally, still my email in case someone has mailed me directly or I have a notification from a site that doesn’t have internal messaging. That’s just a pain! History tells us that these closed systems do not last. Let’s have a look at a couple of examples.

Let’s look at email and how it evolved. In fact, it seems to have evolved in the same direction twice, first as technology allowed and second due to commercial “lock-in.” It started out as a way to communicate between users on a single machine. This doesn’t make much sense if you’re thinking about personal computers, but in the sixties and early seventies the concept of having your own machine just wasn’t a reality. As machines started to be linked together, so did the email systems. This wasn’t always easy as the different operating systems often had their own “standards” and some, such as Unix, often came with several incompatible implementations. After local networks were installed, people started thinking globally and began plugging their networks together, creating the Internet ((Okay, so I edited out a few details. I’m trying to show the general trajectory rather than every last twist in the story.)).

Many of the PC vendors that had not been involved in earlier eras, and the bulletin boards that catered for them ((I’m including systems like AOL and Compuserve here.)), went straight for the second tier: a proprietary system barely capable of talking to the outside world. There were a variety of reasons for this, whether by design (not wanting people to exchange messages without buying their software) or simple laziness, but either way the result was the same. To a certain extent that’s where we still are in the Microsoft world. Exchange will talk to the rest of the SMTP world, albeit reluctantly and, even then, it’s not one hundred percent standards compliant ((Ever wondered what the winmail.dat files are when you open a message in an application other than Outlook?)). Meanwhile, the rest of the world, even companies famous for shunning technologies Not Invented Here, are using industry standards to communicate.

If we then step forward to the last decade and the progress of instant-messaging software, we see the same thing in the process of happening. We start with completely separate islands, where I can talk to other people on, say, AIM but friends on MSN are off limits. I either have to push my “buddies” onto the same network or use applications like Adium so I can connect to multiple networks from the same software. Then, a couple of years ago, we saw the first signs of interoperability, with a pact between Yahoo and Microsoft. And, increasingly, we see the uptake of open standards like Jabber, which is used as the foundation for Google Talk.

So, in the case of both IM and email we started with competing, incompatible technologies that eventually merged into one unified, interoperable version. Is that going to happen with Facebook and MySpace? I’m not so sure. After all, we already have “messaging” applications outside these social networking sites. I see both as more of a layer on top of traditional email services, acting as an intermediary when communication is first initiated.

I’m not anti-social networking (I am a member of LinkedIn) but I am keen that we don’t take a step back into the “dark days” of the Internet when we had AOL and MSN competing to keep their users separate from the outside world. Walled gardens are not what the Internet is all about; this kind of system only benefits the companies that own the various properties. Let MySpace do the social bit, introducing people, but let the experts, the proven IM and email systems, keep the communication going.

Double Standards?

Microsoft have been getting lots of press recently because of their new Zune music player. One of its major features is its wireless interface that lets you share music; even most of the advertising talks about the social implications ((It amuses me that with all the money that Microsoft has, the best their marketing people can come up with to describe this is “squirting.” At best that sounds comic, at worst somewhat rude. What were they thinking?)). But let’s have a quick look at that functionality in more detail.

If I decide that I want to expend an hour of battery life in order to see other Zunes in the area, what can I do? Most famously you can transfer songs. As I’m sure you’ve heard by now, there are limits. When I receive a song, I can play it three times or hang onto it for three days ((Even this, it turns out, is a simplification. At least one of the major record labels has forbidden wireless sharing of their music entirely. Unfortunately they don’t tell you about this until you actually try to transfer the file yourself. Is this legal? Is it not a case of adding restrictions after the sale?)) but after that all I get is an electronic post-it note reminding me about it. Clearly a lot of thought and a lot of engineering effort has gone into these limitations.

What about movies? Sorry, bad news here. You can’t transmit them at all.

Zune can also store pictures. What limits have Microsoft provided to protect photographers?

The answer, it turns out, is none. You can transfer as many pictures as you like to as many people as you like. Once transferred, they are visible indefinitely and can even be copied to further Zunes.

Er, hello? Double standards?

I imagine that the main argument is that most people don’t have a bunch of professional photographs on their computers but do have commercial music. How far can we get with that line of thinking? There is a certain logic in it: most people don’t write their own music, even with relatively simple-to-use applications like GarageBand, but they do have large collections of holiday snaps.

However, the argument starts to fall down when you think about movies. Do people have only commercial movies and nothing personal? I don’t think so. While it is possible to rip a DVD and put it on your iPod, it’s legally dubious, non-trivial (because of the CSS scrambling scheme) and time consuming (transcoding to MP4 takes a long time even on quick machines). Even if you use P2P software to download an illegal copy it’s likely to be in DivX format, which cannot be used directly by the Zune, so that time-consuming transcoding step returns. My guess is that people are, in fact, much more likely to have home movies. Of course, if you made the movie you’ll also own the copyright and are quite likely to want to send it to friends and family. Certainly my wedding video has done the rounds, and the video of my attempts on a Segway has been distributed fairly widely.

That being the case, why are the limitations on distributing movies even more severe than those for music? There’s a definite mismatch between desired usage patterns and the programmed restrictions.

So where have the restrictions come from and why do they vary so widely? Maybe a clue can be found in the fact that Microsoft are paying the RIAA $1 for each Zune sold.

Why would Microsoft do that? Clearly, in the US, the RIAA, for music, and the MPAA, for movies, hold a lot of sway. But for photographers? I’m not aware of a single organisation that has the same level of influence.

I’m sure Reuters and PA protect the copyright of their own images, but who protects everyone else? Perhaps this is because, while movies and music require large teams, photography is more often a solo activity; it certainly has no relation to the value of the medium.

Ultimately I think this is another strike against the draconian DRM measures that are currently being applied to movies and music. I have nothing against digital rights management in the abstract, but implementations that restrict or remove rights that you already have by law just make the music labels and movie distributors look like money-grabbing opportunists.

When is a pencil and paper better than a computer?

In this article in MacUser, Howard Oakley notes that a number of schools have recently banned the use of wireless networks due to the unknown effects of the radio waves involved. He then connects this with the declining number of pupils taking science subjects at those same schools and, consequently, their ability to understand the likely risks of such networks.

It’s an interesting piece, but what strikes me is that as the general population’s understanding of how the world works dwindles, our reliance on high technology increases ((As this article asks, in relation to declining interest in science degrees, “do they just totally not care about where things like web search and MP3 codecs and 3D graphics and peer-to-peer protocols come from”?)).

What I find incredible is that we sometimes move to a highly technical solution despite there being little advantage in it. Or at least, as far as I can see, the only advantage is that it is digital and new.

My favourite example is that of electronic voting machines. It’s easy to point and laugh at all the problems that they’ve been causing, particularly in the recent elections in the US. But despite the problems, despite every indication that it is often the machines rather than the electorate that decide who wins an election, there is still a drive to increase their use.

The main thing that I want to know about the voting machines is this. What problem are they solving? What part of the old system was so broken that it required a complicated, flawed and unreliable new system? ((It’s also worth noting that before the new electronic machines, the US had problems. Remember the “hanging chads” problem with Bush’s first election? That was a flaw in a method of automatically counting votes.))

Some say that the current system is inefficient, labour intensive or slow. Unfortunately, as far as I can see, that’s either untrue or by design. The system needs to be anonymous and yet still allow fraud to be tracked, two ends of the privacy spectrum. The model in use is similar to public key cryptography in that working out who cast a particular vote is not impossible. In fact, in principle, it’s very easy. But unless you have plenty of time to manually check thousands of ballot papers it’s going to take a while. This is by design.

Similarly, the effort required to count the votes in the first place is also a benefit. It makes it more difficult for any individual to have a dramatic effect on the final outcome. This is a good thing.

And slow? That’s simple to fix: throw more people at it. That speeds up the count and reduces any inherent bias.

I love new technology and gadgets, I’m fascinated by how they work and the effect that some of them have on society. But in the end, you have to use the right tool for the job. And the right tool does not always have an embedded computer.

The Promise, The Limits, And The Beauty Of Software

This evening I went along to this year’s Turing Lecture, an annual presentation hosted by the British Computer Society (of which I’m a professional member) and the Institution of Engineering and Technology. This year’s lecture was given by Grady Booch, someone that most people in IT will either have heard of or, at the very least, been influenced by. He started his early career working on object-oriented design and is currently, and passionately, working on a project to collect the architectures of a hundred computer systems.

It’s difficult to pick out highlights, partially because there were quite a few but mainly because I wasn’t taking notes and can’t remember half of the parts that I pledged to write about. I do remember that he gave a one-line summary of every decade from the 1920s, culminating in the 2030s being described as “The Rise of the Machines.” Hopefully not in the same way as in the Terminator movies. And on a smaller scale he talked about his front-door bell crashing and why, as an IBM Fellow, he was using a PowerBook and mocking some less reliable operating systems.

The lecture was broadcast live on the Internet and, they tell me, will be available to watch on the IET website, probably by the time you’re reading this. It’s well worth a watch.

Backup

Reading this article by David Pogue reminded me of my own search for a reliable and easy backup solution. I came to a rather different answer so I thought it might be worth detailing a little history, the options I considered and the one I eventually chose.

Ancient History

In the olden days — i.e., going back a couple of years — I split the files into four categories:

  1. System files such as the operating system itself and all my applications
  2. Email
  3. Small files such as Word documents and Excel spreadsheets
  4. Big files such as movies, photos and music

Category one, system files, I didn’t back up at all, figuring that I had all the discs and licence keys so I could recreate the whole thing if necessary.

I was much more careful with emails. While my archive gets a little spotty in places where I’ve not been able to transfer messages from one computer to another, I have some going back to 1997 and have no intention of losing them. So I used the IMAP server ((There are two types of email server in common use at most ISPs. “POP” is the most common and is designed to allow you to download messages but not to store them for the long term. “IMAP” is rather more sophisticated: email programs cache local copies of messages that are permanently stored on the server.)) provided by Apple’s .Mac service. This had the added bonus of allowing me to access my emails on my desktop, laptop and work machines.
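To make the POP/IMAP difference a little more concrete, here is a minimal sketch using Python’s standard imaplib module. The server name and credentials are placeholders, not a real account, but the key point is that the messages stay on the server and the client only fetches copies of them:

    import imaplib

    # Connect to an IMAP server over SSL (hostname and credentials
    # are placeholders, not a real account).
    conn = imaplib.IMAP4_SSL("mail.example.com")
    conn.login("user@example.com", "password")

    # Open the mailbox read-only: we only cache copies locally;
    # the messages themselves remain on the server.
    conn.select("INBOX", readonly=True)

    # Fetch just the headers of every message, deleting nothing;
    # a typical POP client would download and then remove them.
    typ, data = conn.search(None, "ALL")
    for num in data[0].split():
        typ, msg_data = conn.fetch(num, "(BODY.PEEK[HEADER.FIELDS (SUBJECT DATE)])")
        print(msg_data[0][1].decode(errors="replace"))

    conn.logout()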

For small files, category three, I also used part of the .Mac service, this time the iDisk. iDisk is basically an online disk ((Technically speaking, it’s just a web server accessed using the WebDAV protocol. OS X cached the files so they were available off-line too.)), automatically copying anything saved there onto a server on the Internet. Everything was magically and almost immediately backed up. Again, this allowed me to access my files pretty much anywhere.

I backed up larger files on a more ad hoc basis. I copied my photos onto CD (or DVD for larger projects) more or less as soon as they were downloaded from my camera. Usually. As ever, laziness had a tendency to set in and it didn’t always happen for a while after the event; not good. Music downloaded from iTunes followed a similar pattern.

Music ripped from CD wasn’t backed up at all: I had no easy way of backing up over 20GB of data and, in theory, I had all the source CDs anyway.

What happened next

There are (at least) three problems with this system. First is .Mac’s reliability, something I’ve already talked about. Second is the piecemeal approach, where some things are saved immediately and other things might not be backed up at all. And finally, if my hard-disk did ever die, how long would it take to restore everything?

When I got my new MacBook I realised how long it takes to set up all my applications. And when I wiped the hard-disk of my “old” iMac ready for sale on eBay I found out how long it can take to install the operating system and get it up to date. For the record, it took the best part of a day and that was without restoring any data ((First install OS X 10.4.0. The plan was to install iLife and then run software update, but iLife required OS X 10.4.3. Unfortunately, one of the problems with 10.4.0 was that WiFi didn’t work so it took a while to work around that. Then I updated to 10.4.8, which was over 100MB of files. Then I was able to install iLife, which also required some very large updates. In all, there were over 250MB of updates to just iLife and OS X.)).

In reality, despite these problems, I was far too lazy to bother changing things. Until, that is, I decided on two things: not to renew my .Mac subscription and to get a new laptop to replace not just my ailing iBook but also my iMac. Without .Mac I had no choice but to re-evaluate my options.

I am going to concentrate only on the backup side of things in this article, although that was not the only thing I changed. For example, rather than use an alternative IMAP provider for my email, I decided to move to Google Applications for Domains. My experience there is perhaps worth another post but for now I will move on.

For backups I considered two main approaches: online backups, as promoted by David Pogue in the first link in this post, and traditional backup software copying data to an external disk.

The answer

I considered a few different online backup solutions. Their pricing structures varied slightly and I’m sure that they all worked, but after I thought about it I realised that none of them were ideal. Pogue mentions the main problem in passing: timing. I have about 100GB of data that needs to be backed up. How long would that take, even on a broadband connection? Let’s see: 256kbps (the upload speed on my cable broadband connection) works out to around 20KB/sec. A little bit of maths ((100 * 1024 * 1024 / 20 / 60 / 60 / 24 = 60.68)) suggests it could take 60 days to upload the whole disk! In the event of a disaster it would be quicker to download everything (download speeds are ten times faster than upload speeds), but it would still take nearly a week! Even if my maths is bad (very much a possibility), I still think my bandwidth would make online backup impractical.
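For anyone who wants to check that sum, here it is as a few lines of Python. The 100GB of data and the 20KB/sec sustained upload rate are my own estimates, so plug in your own figures:

    # Rough upload-time estimate: 100GB of data at roughly 20KB/sec.
    # Both figures are estimates for my own connection; adjust to taste.
    data_kb = 100 * 1024 * 1024            # 100GB expressed in KB
    upload_rate = 20                       # KB/sec on a 256kbps uplink

    seconds = data_kb / upload_rate
    days = seconds / (60 * 60 * 24)
    print(f"Initial upload: {days:.1f} days")     # roughly 60.7 days

    # Downloads are about ten times faster, so a full restore would be:
    print(f"Full restore: {days / 10:.1f} days")  # roughly 6.1 days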

Note that I think online backup is a fantastic idea in principle, just not one that works for me. If you mainly have Word documents to backup I’m sure it’s cheap and easy.

So, software. I had been using Apple’s Backup software, which is mainly for use with .Mac but also works with external hard-disks. However, it was slow and, according to reports on the Internet, very unreliable. The one time I ever needed it I managed to get my files back, but I didn’t really want to risk it another time.

My LaCie external hard-disk came with SilverKeeper backup software. I had a quick play with it, but it looked overly complicated. Having had to write backup software at work, I did have a tendency to look for “complete” solutions (i.e., get me back that file I deleted three weeks ago), but on further thought I realised that I didn’t need anything so sophisticated at home.

Most of the other software I looked at was just as complicated and, worse, it all had the same failing: it didn’t back up the operating system itself. I’d already seen that it takes nearly a day just to restore the system software before you even start looking at the data.

In the past I had played with Carbon Copy Cloner (CCC), software that clones a whole disk. I liked the idea (the backup is just a complete, bootable copy of the main disk) but I didn’t like the amount of time it took to copy the whole disk every time.

In the end I decided on Shirt Pocket’s SuperDuper!, which is like CCC except it has a “Smart Update” feature that only copies the files that have changed, making a weekly backup take, on average, less than ten minutes. Nice.
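I don’t know exactly how SuperDuper! implements Smart Update, but the underlying idea of copying only new or changed files is simple enough to sketch. Something along these lines, with placeholder paths, comparing file sizes and modification times before copying:

    import os
    import shutil

    def smart_update(source, destination):
        """Copy only files that are new or changed since the last run.
        A sketch of the incremental-copy idea, not SuperDuper!'s actual code."""
        for dirpath, dirnames, filenames in os.walk(source):
            relative = os.path.relpath(dirpath, source)
            target_dir = os.path.join(destination, relative)
            os.makedirs(target_dir, exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                dst = os.path.join(target_dir, name)
                src_stat = os.stat(src)
                # Skip files whose size and modification time are unchanged.
                if os.path.exists(dst):
                    dst_stat = os.stat(dst)
                    if (dst_stat.st_size == src_stat.st_size
                            and int(dst_stat.st_mtime) == int(src_stat.st_mtime)):
                        continue
                shutil.copy2(src, dst)  # copy2 preserves the timestamp

    # Placeholder paths; a real backup would also have to remove files
    # deleted from the source and deal with permissions and ownership.
    smart_update("/Users/me", "/Volumes/Backup/Users/me")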

So that’s the system I’m using at the moment. Would something like that work for you too?

Hiring

Talking about Google’s old and new hiring practices seems to be all the rage at the moment, so I thought that I would get in on the act.

I got through two phone interviews for a technical consultant role here in the UK before being rejected. My second interviewer told me that he’d had fourteen interviews before being hired. That’s just an absurd number. How much holiday and sick leave can you take at short notice without arousing suspicion?! (Both of mine were long enough, or required Internet access, that I couldn’t do them at work.)

By the end of the second interview I was in two minds about whether to take things any further anyway. I wanted to work for Google, but could I go through fourteen interviews? I was concerned about the money, as there was no figure on the job spec and big names often offer a low salary with share options to compensate. I can’t pay my mortgage with stock options! And the work on the consulting side didn’t sound quite as appealing as the kind of thing you hear about on the development side.

Most significant, though, was the style of interview. They asked brain-teasers, which I tend to think are a lousy way to size up a candidate. Either you know the trick and can do it instantly, you get lucky or you need a hint. None of these really shows how smart you are, how well you can program a computer or interact with clients or, indeed, how you would handle any other aspect of the job. The interviewer was also clearly typing away in the background while I was trying to answer the questions, only half listening, which was just plain rude.

Most communications were friendly and personal, right up to the last. The rejection email was signed, impersonally, “Google Staffing.”

So overall I’m not terribly impressed with Google recruitment. Okay, maybe I’m biased against them as they turned me down, but as an interviewer I’ve always considered it part of my job to leave a positive impression of the company, even with candidates who are not going to be hired. Google failed in this.