CoreImage and UIKit coordinates

Recently I found a very nice CoreImage tutorial for iOS, Easy face detection with core animation in iOS5, and after downloading the source I discovered that this quite popular tutorial was teaching something wrong: coordinate system conversions!

I can't remember how many bugs and problems I had back at university because I converted coordinates badly or used the wrong coordinate system. So I decided to write this little post so my great audience (approx. 6 viewers per day) can benefit from it :)

CoreImage Coordinate system

(Diagram: the origin (0,0) sits at the bottom-left corner, with the x-axis pointing right and the y-axis pointing up)

In CoreImage, each image has its own coordinate space, with its origin at the bottom-left corner of the image. Each image's coordinate system is device independent.

UIKit Coordinate System

(From View Programming Guide for iOS)


The origin of a UIView's frame is at the top-left corner, and its coordinates are expressed in its superview's coordinate space. They are not independent.

This means that in CoreImage each image has its origin at {0,0}, while UIKit views do not necessarily.

Converting coordinates


We should convert CoreImage coordinates to the UIKit coordinate system, not the other way around: chances are most of your code is already in UIKit coordinates, and other iOS developers will expect {0,0} to be at the top-left corner. Be nice and don't change everything just because of CoreImage. :)

This is easily done with an affine transform:

x_ui = x_ci
y_ui = h − y_ci

where the ui and ci subscripts mean UIKit and CoreImage coordinates respectively, and h is the height of the image in question.

We could do this manually, but fortunately there are a bunch of functions for this task: CGAffineTransformMakeScale, CGAffineTransformTranslate, CGPointApplyAffineTransform and even CGRectApplyAffineTransform! Thanks to Apple for making our lives easier.

The code


// Create the image and detector
CIImage* image = [CIImage imageWithCGImage:imageView.image.CGImage];
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace 
                                          context:... options:...];

// CoreImage coordinate system origin is at the bottom left corner
// and UIKit is at the top left corner. So we need to translate
// features positions before drawing them to screen. In order to do
// so we make an affine transform
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform,
                                    0, -imageView.bounds.size.height);

// Get features from the image
NSArray* features = [detector featuresInImage:image];
for(CIFaceFeature* faceFeature in features) {

  // Get the face rect: Convert CoreImage to UIKit coordinates
  const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

  // create a UIView using the bounds of the face
  UIView* faceView = [[UIView alloc] initWithFrame:faceRect];

  ...

  if(faceFeature.hasLeftEyePosition) {
    
    // Get the left eye position: Convert CoreImage to UIKit coordinates
    const CGPoint leftEyePos = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
    ...

  }

  ...
}


You can download the sample from here and see that the result is pretty much the same as in the original tutorial. Only this time we didn't mess with the coordinate system :)

In case you didn't notice, the original example changes the whole window's coordinate system, causing its origin to be at the bottom-left (like Cocoa on the Mac), hence the imageView appears at the bottom.


Another twilight theme for Xcode

I just created my ...


err ...


actually I just tweaked the Twilight theme a bit and named it "Twilight Console". It resembles Twilight, but it uses the Monaco font at size 10, and the most important part (at least to me) is that the console is also customized:


If you play with the console very often (gdb, the LLDB debugger, etc.) you might like this theme.

Download TwilightConsole.dvtcolortheme or get it from GitHub :)


This work is licensed under BSD Zero Clause License | nacho4d ®