Sandboxing and URL Schemes

Josh Centers at TidBITS:

But with Twitter's recent announcement of App Graph, another explanation for the company's desire to dominate the user experience has appeared: Twitter wants to collect personal information from your devices. App Graph will use the official Twitter app to gather the list of apps installed on your iOS devices and send that list back to Twitter. (It seems to do this by scanning a list of x-callback-urls — a method of inter-app communications developed before iOS 8's Extensibility functions.)

This motivated me to submit Radar 19156479 to Apple:

Product: iOS

Classification: Security

Reproducibility: Always

Title: Sandboxing and URL Schemes

Description: The iOS app sandbox prevents an app from directly accessing a list of other installed apps. The ability to determine whether the device has an app that responds to a given URL scheme circumvents that protection. There are good reasons for this. An app should not offer to open Google Maps on a device that does not have Google Maps. That said, deriving a list of all installed apps and sending it somewhere seems like something the app sandbox should make impossible.

Steps to Reproduce:
1. Attempt to write an iOS app that, without the user's knowledge or consent, gets a list of all apps installed on the device and sends it to your server.

Expected Results:
I would expect to find this impossible.

Actual Results:
I can derive a reasonably complete list of installed apps using a publicly available list of URL schemes. I can iterate through that list, calling [[UIApplication sharedApplication] canOpenURL:] for each URL scheme to determine whether an app responding to that scheme is installed.
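
The technique can be sketched in a few lines of Objective-C. The scheme list here is a hypothetical sample for illustration; an app abusing this would iterate over a comprehensive list of known schemes:

```objc
// Sketch of the enumeration described above (iOS 8 era APIs).
// The schemes below are a small illustrative sample, not a real scan list.
NSArray *schemes = @[@"comgooglemaps", @"twitter", @"fb", @"dropbox"];
NSMutableArray *installed = [NSMutableArray array];
for (NSString *scheme in schemes) {
    NSURL *probe = [NSURL URLWithString:[scheme stringByAppendingString:@"://"]];
    if ([[UIApplication sharedApplication] canOpenURL:probe]) {
        // An app that registered this scheme is installed on the device.
        [installed addObject:scheme];
    }
}
// `installed` could now be uploaded without the user's knowledge or consent.
```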


Version & Build:
iOS 8.1.1 (12B435)

Additional Notes:
I do not have a perfect technical solution to this problem. At a bare minimum, I think App Review should reject apps that abuse the canOpenURL: call.

Calls to canOpenURL: and openURL: could require an entitlement, perhaps with a specific list of URL schemes. App Review could require developers to explain how and why they need to open URLs with those schemes in order to function properly.
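
To illustrate, such an entitlement might look something like the following. This is a purely hypothetical key of my own invention; nothing like it exists in iOS 8:

```xml
<!-- Hypothetical entitlement: the app declares, and App Review audits,
     exactly which URL schemes it may query or open. -->
<key>com.example.hypothetical.url-scheme-access</key>
<array>
    <string>comgooglemaps</string>
    <string>twitter</string>
</array>
```

Any scheme not listed would cause canOpenURL: to return NO and openURL: to fail, making a bulk scan of installed apps impossible without an implausibly long, reviewable declaration.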

If you agree that this is a weakness that Apple should address, please consider filing a duplicate of this report.

Another Grand Jury Trial

Judd Legum writing for ThinkProgress:

If McCulloch wanted to, he could present evidence in the case to a new grand jury and seek an indictment of Wilson. Although a constitutional protection known as “double jeopardy” says you can’t be tried for the same crime twice, the provision has not yet been triggered since Wilson was never even charged.

There is a provision of Missouri Law — MO Rev Stat § 56.110 — that empowers “the court having criminal jurisdiction” to “appoint some other attorney to prosecute” if the prosecuting attorney “be interested.” (The term “be interested” is an awkward legal way to refer to conflict-of-interest or bias. The statute dates from the turn of the 20th century.)

I don't have enough information to know that Darren Wilson should be convicted. It is possible that he should not even be indicted. The problem is that Bob McCulloch made no legitimate attempt at an indictment.

Back to Jekyll

I wrote earlier this week that I chose to host this blog on Squarespace and to leave my old posts behind. My experience with Squarespace was not good, so I am instead reviving my old Jekyll blog.

At some point I might consider using WordPress or something similar, but I like the fact that Jekyll generates static pages. That alleviates a huge set of security and performance concerns. I would like the ability to post from my phone, but honestly I am not sure that I would use that capability if I had it.

I do promise that this blog will not turn into a blog about the blog.

A Commitment to Blogging

I had a great time attending CocoaConf Boston over the past few days. At that conference, Rob Rhyne gave a session where he discussed the importance of talking about one's work while in progress. Until now, I have rarely talked online about my current projects.

This blog will not be exclusively focused on my work, but the need to talk about my work outside the context of my company web site inspired me to create this blog. I had a site and blog here before, but the site was so sparse and posts were so laughably infrequent that I saw no reason to bring that content forward.

I am using Squarespace to host this site. I did not want to maintain a new server instance or add moving parts to an existing server. Jekyll is fantastic for generating static sites, but I wanted the ability to dash off a short post from my phone. Jekyll does not make that easy. Since I am using my own domain name and since Squarespace makes it easy to export content, I can easily move this site if Squarespace becomes unsuitable.

In order to force myself to post regularly, I am publicly committing to at least one medium-length post per week.

Update 11/21/2014: I ended up moving back to my Jekyll-based blog.

A Retrospective: CloudPull and Google Reader

With the Google Reader shutdown, I have been reflecting on the CloudPull product decisions I made as they relate to Google Reader in general and to the shutdown in particular. I also thought it would be interesting to share my thoughts with others.

Here is a list of relevant decisions, and what I think about them in retrospect.

May 2011: What to back up

In the beginning, CloudPull backed up:

  • The title and URL for each of your Google Reader feed subscriptions.
  • The title and URL for each of your Google Reader starred, shared, and liked articles.

CloudPull did not back up:

Content from articles: I could have backed up the article content from within the feed, and I could have even made a web archive of the article content from the web site serving the feed. I decided that neither was necessary. The article content from the originating web site is outside of the Google account and therefore outside the scope of what CloudPull should be backing up. The content within a feed is often incomplete, and is almost always mirrored on the originating web site. While the decision not to back up article content seems reasonable to me, I did get feedback shortly after the version 1.4 update asking to back up article content. I can't implement every feature request I get, but in hindsight I should have implemented the ability to back up article content from the feed.

Not backing up tagged articles: Honestly, I did not back up tags because I strongly suspected that they were little-used. In addition, Google Reader automatically applies tags to articles based on their assigned subscriptions. Backing up tags that were assigned to subscriptions would have significantly slowed down backups. I have mixed feelings about this, but I think this was the right decision at the time.

Not backing up all articles: Another approach would have been to back up every single article in your feeds that was available via the API. This is not functionality I ever wanted for myself, and I never saw any customer feedback indicating that it would be desirable. More on this later.

November 2011: Liked and Shared Articles

In October or November of 2011, the concepts of “liking” and “sharing” articles were removed from Google Reader. In their place was new functionality to share articles via Google Plus. I adapted by keeping existing liked and shared articles, but no longer updating the lists or backing them up for new customers and accounts. While the API still provided the ability to back up this data, the Google Reader API was never officially supported by Google, and I had little confidence that this aspect of it would keep working. I think this was the right decision.

February 2012: Liked and Shared Articles

CloudPull 2 introduced the concept of snapshots. Since liked and shared articles were gone from the Google Reader user interface, and since no new customers had asked for this ability in CloudPull 1 after they had been removed for new customers in November, I decided to treat liked and shared articles as deleted data. From a user's perspective, they essentially had been deleted from Google Reader, and no one seemed to miss them. I handled this by treating liked and shared articles as items that had been deleted from Google as of the first CloudPull 2 backup. The items were available via old snapshots until (by default) 90 days later when the snapshot was deleted. I think this was the right decision, and never got a single bit of feedback that suggested otherwise.

January/March 2013: Free version of CloudPull

In January of 2013, I added a version of CloudPull to the Mac App Store that was free for a single account. With an in-app purchase, you could unlock premium features such as the ability to back up multiple accounts. In March, I changed the Direct Download edition so that, instead of providing a free 30-day demo, it used the same free-for-one-account trial model.

In general, the change in sales model led to a significant reduction in Mac App Store sales and a significant improvement in direct download sales. I may never understand why customers from the two sales channels reacted so differently to what is effectively the same change.

Both free trial models have significant limitations as they relate to the Google Reader shutdown. With a 30-day trial period, CloudPull would perform continuous backup for 30 days and then stop backing up your accounts. However, it would continue to provide full access to your backups indefinitely. The ability to perform continuous backup is no incentive at all for customers looking to create a one-time archive of a service that is going away.

The single-account model was also problematic, because I strongly suspect that using multiple Google accounts with Google Reader was very uncommon.

Considering that I had no way of knowing that Google Reader would shut down, I do not regret either the original 30-day demo model or the later change to the single-account trial model. But I should have reconsidered them once the shutdown was announced.

March 13, 2013: Google Reader shutdown announced

The day the Google Reader shutdown was announced, I pre-announced an update that would restore the ability to back up shared and liked articles and that would allow these articles to be exported into HTML Bookmarks files. This was a very good decision; the only thing I question is why I did not include the ability to export article links into HTML Bookmarks files on day one.

I ordinarily live by the cardinal rule of never, ever, ever pre-announcing a ship date for a software feature I have not yet written. But in this case it was the right decision. I knew I could easily meet my own deadline, and I knew that customers would be asking for information.

Also, on this day, I came across a tweet from Tom Church pleading for some way to retrieve tagged articles. The next day, T.J. Usiyan posted a complaint about Google Takeout not offering export of Google Reader tagged articles. I knew I needed to do something, and I also knew I could not get it into the update that I had already pre-announced. So, a subsequent update was born.

I also realized that, primarily as a side-effect of keeping XML blobs around for diagnostic purposes, CloudPull did have a copy of article content from within the feeds. It never exposed that data, but it was there. So, I changed the way Quick Look previews work to expose that content. I heard from several customers who really wanted that content, because Google Reader had the only copy of some articles whose original sites had shut down. I am extremely happy that I added this functionality; I only wish I had included it from day one. I also wish that CloudPull provided a better reading experience for that content, but I can add better reading functionality in a future update for customers that have CloudPull-based Google Reader backups.

In the end

I am very glad that I released both of these feature updates, but I do have some regrets about my handling of the shutdown.

First, I should have made the improvements in these two updates premium features, available only to paying customers. My impression is that many customers downloaded the app, used it for free to back up their single Google Reader account, and had no need to buy anything from me because I gave the functionality away. I appreciate that many did buy out of a sense of gratitude and that many others may eventually buy because they now see what the app can do. But not charging for the new capabilities in these two updates was a missed opportunity. Ideally I would have made backup of Google Reader a premium feature, but there was no good way to do that while grandfathering in existing free customers.

Second, I regret that the app never backed up all articles. Honestly, I never saw this as meaningful, and this is not functionality that I ever wanted. But based on feedback I saw during the last week of Google Reader's existence, and based on the popularity of Mihai Parparita's reader_archive tool, I think this would have been functionality that many would have appreciated. But I did not realize this until it was too late.

Finally, I grossly underestimated how many people would wait until the last minute rather than moving on to other reading systems long before the July 1 date. I originally announced that I would release an update disabling backup of Google Reader on Friday, June 28. I later moved that date to June 30. But that week and throughout that weekend, I saw a big increase in web site traffic and sales, largely from customers looking to back up and archive their Google Reader data. I ended up not releasing the update that disabled Google Reader backup until after the shutdown finally happened. I looked a bit silly delaying the update twice.