The biggest single thing I wanted MobileMe for, it won’t do – and it’s not clear in advance that it won’t. That thing is using MM email with your own domain name. I was hoping to move my email into it from another provider, but I won’t be doing that now. So, for my money, the email provided with MM is useless, because you can only reply from your @me.com address. Google have got this right – why can’t Apple?
Now I’ve got that off my chest, there are some things to like in MM. I like the way it syncs my calendars and contacts lists wirelessly with the ones on my iPhone. Critics might say that the iPhone should do that anyway, but let’s just accept that it doesn’t. MM is a welcome addition, then, to someone who has a Mac and an iPhone.
The iDisk looks cool. It’s 10-20 gigs of storage that is always kept replicated onto an Apple server somewhere. In theory, you can recover your files onto a new machine or use it for sharing. I haven’t quite figured out yet how to use it to keep a permanent copy of my favourite document folders in the cloud without manually copying them or setting up an rsync, but there’s probably a way. And it will let you do a photo gallery online, along with various kinds of website. But, then, I am already well supplied in that department.
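Until I find a built-in way, a hand-rolled one-way sync is easy enough to sketch in Python. This is a minimal sketch, not anything MobileMe provides – the `mirror` function is mine, and the iDisk mount point and folder names in the comment are assumptions:

```python
import os
import shutil

def mirror(src, dest):
    """Naively mirror src into dest: copy any file that is missing
    from dest or newer in src. A poor man's one-way rsync."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dest, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves the modification time

# On a Mac the iDisk appears as a mounted volume; these paths are
# assumptions for illustration:
# mirror(os.path.expanduser("~/Documents/favourites"), "/Volumes/iDisk/Documents")
```

Run from cron (or launchd), that would keep the cloud copy no more than one interval stale.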
There are a few other pieces of goodness I haven’t tried yet, such as getting into my Mac at home when I’m out and about, or getting remote access to my Time Capsule drive. But I already do these things without MM.
All in all, I suppose it’s okay for $140 NZ per annum.
You may have gathered that I’m not taken with MobileMe as much as some other products from the same stable. It strikes me as, well, too little, and surprisingly hard to set up. I expected more. I hope Apple will continue developing it into something a bit less underwhelming.
A short essay about value in software. Conclusion: CIOs and government need to take a very good look at free software for desktops and other generic software.
I’m at Foo Camp listening to Ben Goodger of Google and Robert O’Callaghan of Mozilla. Both are deeply technical people who are developing web browsers – Ben is developing Google’s Chrome browser, and Robert is part of the Firefox team. (And he lives in Auckland, along with several other FF team members.)
Ben and Robert are telling us what’s coming up in both their browsers, and they are asking the web developers in the room – a couple of dozen of us – what they want to see in the browser. This is such a great way to develop browser code – there’s a direct feedback loop between the guys who develop the new stuff and the people who exploit it to make cool websites.
You’ve seen the ‘Target’ word puzzle that runs in most daily newspapers. It looks like a 3×3 square of letters with the central letter highlighted. Your mission (should you choose to accept it) is to make as many dictionary words as possible out of the letters in the puzzle, each including the central highlighted letter. There’s always one nine-letter word.
I quite enjoy looking at the puzzle and trying to get the long word, but I lack the patience to list out all the others. A couple of years ago I decided to try to automate doing the puzzle – yes, I know it’s cheating – and here are the results. Read on for some geeky Python stuff.
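The core of the approach fits in a dozen lines: a word qualifies if it contains the centre letter and uses only the puzzle letters, respecting how many times each appears. This is a sketch of that idea, not my original program – the function name is mine, and the toy word list stands in for a real dictionary file such as /usr/share/dict/words:

```python
from collections import Counter

def target_words(letters, centre, dictionary, min_len=4):
    """Return dictionary words that contain the centre letter and can be
    spelled from the puzzle letters (with multiplicity), longest first."""
    pool = Counter(letters)
    results = []
    for word in dictionary:
        w = word.lower()
        if len(w) < min_len or centre not in w:
            continue
        # Counter subtraction drops non-positive counts, so an empty
        # result means every letter of w is available in the pool.
        if not (Counter(w) - pool):
            results.append(w)
    return sorted(results, key=len, reverse=True)

# A toy dictionary for illustration:
sample = ["time", "item", "emit", "mite", "metre", "remit", "trite", "cat"]
print(target_words("trimester", "t", sample))
```

Feeding it a real word list is then just a matter of `open("/usr/share/dict/words").read().split()`.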
The New Zealand Institute has written a series of think pieces on what it calls the “weightless economy” – using broadband to ship ideas around the world rather than ships to send dead trees and animal carcasses. They aren’t the only people to have observed what an opportunity the Internet offers the New Zealand economy – a greater one than for most countries, because of our remoteness – but they are very eloquent and couch it all in language that economists understand.
You might think that would be welcomed by bureaucrats and politicians alike. Non-polluting, renewable, no food-miles, etc. And both major parties have promised to spend up in varying amounts to improve New Zealand’s Internet. That makes it all the more surprising that the government is apparently trying to kill off the Internet in New Zealand altogether. That’s right – S92A of the Copyright Act, which ministers have just told us to “like or lump”, risks chilling new services on the Internet so they never get started, and driving the companies that distribute Internet access out of existence. (Most of them are barely profitable now; it’s the sexy service companies like Google that make the big bucks. Go figure.)
Most of us agree that copyright needs some kind of protection in the digital world. Killing the Net to achieve it is too high a price. That’s what I talked about on Radio New Zealand National today – read on for my notes or download the audio as ogg or mp3. (more…)
I’m referring to Te Reo Māori, the language of the indigenous people of Aotearoa/New Zealand, and an official language of our country. Even those of us who have no Māori blood should be proud to have a unique language as part of our country’s identity.
Te Reo presents some difficulties to printers and web publishers who assume that it is spelled using the standard Latin alphabet that English uses. It isn’t – Te Reo distinguishes between long and short vowels. A long vowel has the same sound as a short one but it is held for longer. In writing, long vowels are marked with a macron, which is a diacritic appearing as a horizontal bar over a letter. Anyone who learned Latin at school should remember them.
Marking long vowels in Te Reo is not optional if you expect to be understood. A word with a short vowel becomes a totally different word if it is said or spelled with a long one. For instance, keke means cake, whereas kēkē means armpit. Mixing them up could be hilarious – or, more likely, rude.
Vowels with macrons don’t appear in ASCII, or even in extended ASCII (which contains some European accented letters). But they do appear in Unicode. The correct way to display macrons on the web is to use escaped Unicode – HTML numeric character references. Here’s a list of the five vowels with macrons, in upper and lower case:

Ā &#256; / ā &#257;
Ē &#274; / ē &#275;
Ī &#298; / ī &#299;
Ō &#332; / ō &#333;
Ū &#362; / ū &#363;
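If you generate pages from a script, the escapes don’t need to be memorised – the numeric entity is just the character’s Unicode code point. A small Python sketch (the function name is mine):

```python
def macron_entities():
    """Return (character, HTML numeric entity) pairs for the five
    Māori vowels with macrons, upper case then lower case."""
    chars = "\u0100\u0101\u0112\u0113\u012A\u012B\u014C\u014D\u016A\u016B"
    # ord() gives the code point, which is exactly the &#NNN; entity number
    return [(c, f"&#{ord(c)};") for c in chars]

for char, entity in macron_entities():
    print(char, entity)
```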
People have used other ways besides Unicode to capture macrons. One approach was to patch the fonts on a computer so that umlauted letters appeared as letters with macrons. An umlaut is two dots over a vowel, a diacritic used in German among other languages. It changes the quality of a vowel – roughly the difference between “rat” and “rate” – it doesn’t lengthen it. This approach has problems, such as not being able to write German words any more. But the biggest problem is that it is not portable: when you copy text from a system that (mis)uses umlauts to one that renders umlauts correctly, you get this kind of thing:
This is a poor example from our state-owned television broadcaster!
A week ago I posted about my attempts to automate an administrative task with Python. In essence, I’m trying to scrape links to sound files off the Radio New Zealand site and insert them into an entry on my blog. I can only test this on Thursdays, because the links are only present then.
Read on for my experiences yesterday and a revised program.
As you know, I post my radio speaking notes as blog entries. At Miraz’s suggestion, I load these entries in advance and set them to publish automagically while I am on air. WordPress is clever like that.
Sometime after that, Radio New Zealand puts my radio slot online as sound files in ogg and mp3. Thanks, guys. But I don’t know in advance what the file names are going to be, so I can’t link to them directly from my post. In practice I generally link to the download page for the whole of Nine to Noon and leave it at that.
Recently, Hamish wrote to me and suggested that I link my sound files directly from my post. I told him that I was far too lazy, but it has set me thinking – surely I can get a computer to do this.
This is the first in an ongoing series of posts about a little programming project I’ve started to automate the process of adding links to the sound files as they become available. I’ll collect the program together in a page on the site.
If you have any interest, read on and see just how easy free and open source tools make it to throw together something like this.
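To give a flavour of how little code this takes: the standard library’s html.parser can pull the audio links out of the download page in a few lines. This is a sketch of the idea only – the class name is mine, and the sample HTML and file names are made up for illustration (the real Radio NZ page layout may differ):

```python
from html.parser import HTMLParser

class AudioLinkFinder(HTMLParser):
    """Collect href values that point at ogg or mp3 files."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.endswith((".ogg", ".mp3")):
                self.links.append(href)

# In the real program the HTML would come from the download page via
# urllib.request.urlopen(); this made-up sample stands in for it:
sample_page = """
<ul>
  <li><a href="/audio/ntn-sample.ogg">Ogg Vorbis</a></li>
  <li><a href="/audio/ntn-sample.mp3">MP3</a></li>
  <li><a href="/schedules/ninetonoon">Programme page</a></li>
</ul>
"""
finder = AudioLinkFinder()
finder.feed(sample_page)
print(finder.links)
```

From there it’s a matter of matching the links to the right date and pasting them into the WordPress entry.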
I try to make this blog work for everyone – that’s why I have the font-size changer in the right hand column, so that everyone can read it despite my somewhat outré choice of white on black. And that’s why I was disturbed to find that Microsoft Internet Explorer 6 doesn’t render it properly.
On the right is what this blog looks like in IE6. Two things are wrong. The white background around the green Adium logo shouldn’t be there – that background is set as transparent. IE6’s forerunner, IE5, gets that one wrong as well. It’s ugly, but I can live with that problem.
The really, really annoying thing as far as I’m concerned is that the right hand column displays below the main column, so most people will never see it. Wrong, wrong, wrong.
It’s telling that, of the 50-odd browsers I tested using browsershots.org, only IE6 got this wrong. IE6 is old, and home users will probably have upgraded by now (hint), but many people are stuck with IE6 at their workplaces, where IT departments like to control desktop configurations and need a very good reason to change versions. The statistics for this blog show that IE6 makes up only about 2.5% of visitors – but that might be because the site is barely usable in IE6.
This must be an example of IE not following standards. Lots of websites have separate code – effectively separate web pages – for IE browsers so that their pages render the same way on IE as they do on other browsers.
I’m left wondering why IE was so non-compliant for so long. I’d like to find an explanation besides incompetence, or the hubris of assuming it could ignore standards and force the web to do its bidding. To its credit, Microsoft realises it has a problem in this area, and the latest IE8 beta makes a real effort to be more standards-compliant. That leads to other problems for sites with IE-specific code, but let’s not go there now.
In the meantime, I’m faced with trying to debug this thing for an old browser on a platform I don’t own, or just giving in and accepting that some people won’t be able to read it even if they want to. Sigh.
Like many others on the ‘Net, I’ve often repeated the old saw that His Billness decided in the Dark Ages of computing (the early 80s) that 640k should be enough for anyone. It appears that, in the words of the Mythbusters crew, that one’s well and truly busted. There’s no evidence he said that, and he did realise at the time that the whole 640k thing would place a limit on growth. He did admit later to being surprised at how quickly that limit was reached, though.
So, Bill: I’m sorry for telling people you made that call.