Tracking FAST corners using SURF descriptors on iPhone 3G.

This video demonstrates my first attempt at tracking FAST corners using SURF descriptors on the iPhone 3G. The framerate is still quite low, but I haven’t started optimizing the code for the phone yet. The table below shows the relative amount of processing time each step takes; hopefully each can be reduced. This post is related to my previous one: OpenCV and FAST corners on the iPhone 3G. An updated post featuring extremely fast FAST corner detection is here:

In short: I compiled static OpenCV libs for the iPhone and ported FAST and SURF to the phone. Here is the table indicating the relative CPU time per frame for every step of the cycle. Everything is calculated on a half-sized scaled version of the frames captured by the phone’s camera.

Step % CPU time per frame
Preparing camera feed frame for OpenCV structures: 18.08%
Calculating integral image for SURF: 6.03%
FAST corners: 3.62%
Calculating SURF descriptors: 66.31%
Corner matching/tracking: 0.54%
Drawing stuff on output image: 2.41%
Converting output image to CGImage: 3.01%
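The integral-image step in the table can be sketched in plain C. This is a minimal standalone version for illustration; the actual code presumably uses OpenCV's cvIntegral, which computes the same thing. SURF needs this because its box filters then cost four lookups regardless of filter size:

```c
#include <assert.h>

/* Build an integral image: ii[y*w + x] holds the sum of all pixels
 * above and to the left of (x, y), inclusive. */
static void integral_image(const unsigned char *src, unsigned int *ii,
                           int w, int h)
{
    for (int y = 0; y < h; ++y) {
        unsigned int row_sum = 0;
        for (int x = 0; x < w; ++x) {
            row_sum += src[y * w + x];
            ii[y * w + x] = row_sum + (y > 0 ? ii[(y - 1) * w + x] : 0);
        }
    }
}

/* Sum over the rectangle [x0..x1] x [y0..y1] in four lookups --
 * this is what makes SURF's box filters cheap at any scale. */
static unsigned int box_sum(const unsigned int *ii, int w,
                            int x0, int y0, int x1, int y1)
{
    unsigned int a = (x0 > 0 && y0 > 0) ? ii[(y0 - 1) * w + (x0 - 1)] : 0;
    unsigned int b = (y0 > 0) ? ii[(y0 - 1) * w + x1] : 0;
    unsigned int c = (x0 > 0) ? ii[y1 * w + (x0 - 1)] : 0;
    return ii[y1 * w + x1] - b - c + a;
}
```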

Any comments and questions welcome.



  1. John Gilmore on Thursday 22 April, 13:15

    Hi, what iPhone OS are you using for this?

    Will I have to jailbreak my iPhone for this to work?

    If so, how do you think this solution can be practically implemented? I.e., what do you need from the new iPhone OS?

    • Carel van Wyk on Thursday 22 April, 13:34

      Thanks for the comment!

      I’m running iPhone OS 2.2.1. This should be possible on OS 3.x if you know what you’re doing, and possibly on OS 4.
      At this stage you would have to jailbreak your phone for this to work, because there is no way to obtain raw frame data from the camera using the public APIs in the SDK. Obviously, as soon as Apple releases a public API that allows processing of camera frames, we will see an explosion of new and innovative iPhone apps that weren’t possible before, but that’s a whole different story.

      For now I’m not really worried about these political issues. The code is standard C/C++, so if Apple remains unwilling to give developers what they need, it can easily be ported to more coder-friendly platforms like Android or Maemo.

  2. Vivek Anand on Thursday 09 December, 17:34

    I have a question: how are you passing the FAST points to the SURF algorithm in OpenCV? FAST returns a vector of CvPoint and isn’t based on any pyramid structure. In SURF, if you want to supply your own set of keypoints, you need extra information such as orientation, scale, Laplacian value, etc. So how do you compute the SURF descriptors?

  3. Carel on Thursday 09 December, 17:59

    If you want to use multi-scale SURF descriptors you will have to generate the image pyramid yourself and run FAST detection at each pyramid level.
    I don’t think you need any information other than scale and position. As far as I am aware, the orientation and the point’s Laplacian sign are added while the descriptor is being calculated. Or am I mistaken?

    I did not use multi-scale SURF; I only performed FAST corner detection at one level, so I used a scale of 1.
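    A minimal sketch of the conversion being discussed. The struct below mirrors the field layout of OpenCV's CvSURFPoint (C API) so the snippet stands alone; in a real build you would include the OpenCV headers and pass the filled keypoints to cvExtractSURF with its useProvidedKeyPoints argument set to 1. Treat the base size of 9 as an assumption here (it matches SURF's smallest 9x9 box filter):

```c
#include <assert.h>

/* Local mirror of OpenCV's CvSURFPoint (C API) so this compiles
 * without the OpenCV headers. */
typedef struct { float x, y; } Point2f;
typedef struct {
    Point2f pt;     /* corner position, straight from FAST */
    int laplacian;  /* sign of the Laplacian; filled during extraction */
    int size;       /* patch size in pixels; encodes the detection scale */
    float dir;      /* orientation; computed by the descriptor stage */
    float hessian;  /* detector response; unused for provided keypoints */
} SurfKeypoint;

/* Wrap a FAST corner as a SURF keypoint. For single-scale detection,
 * as in the post, scale is simply 1.0; for multi-scale FAST you would
 * pass each pyramid level's scale instead. */
static SurfKeypoint surf_keypoint_from_fast(int x, int y, float scale)
{
    SurfKeypoint kp;
    kp.pt.x = (float)x;
    kp.pt.y = (float)y;
    kp.laplacian = 0;             /* set later by the descriptor code */
    kp.size = (int)(9.0f * scale);
    kp.dir = 0.0f;
    kp.hessian = 0.0f;
    return kp;
}
```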

  4. aaron on Wednesday 20 April, 19:47

    Hi. Have you considered using PhonySift instead of SURF? According to the report here: it is fast and accurate.

  5. aaron on Wednesday 20 April, 19:54

    Also, another point would be to check that you have the optimization switches turned on in Xcode for that project. That can make a significant improvement to performance.
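    For reference, a rough sketch of the build settings involved, for the GCC toolchain Xcode shipped with at the time. The -mcpu/-mfpu values target the iPhone 3G's ARM11 core; verify the exact spellings against your SDK before relying on them:

```shell
# Xcode's "Optimization Level" maps to GCC's -O levels; 0 (debug builds)
# leaves all optimization off.
GCC_OPTIMIZATION_LEVEL=3
# Assumed flags for the iPhone 3G's ARM1176 core with VFP floating point.
OTHER_CFLAGS="-O3 -mcpu=arm1176jzf-s -mfpu=vfp"
```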

Leave a comment