Emgu CV for iOS
Revision as of 18:36, 29 April 2012

  • Coming Soon
  • Prerequisite
    • You will need a MonoTouch license for iOS development.

Creating a MonoTouch Project

We will show you how to create a new MonoTouch Project for iOS.

  • First, download Emgu CV for iOS and extract it to a local drive. On Mac OS, open the file "Solution/MonoTouch/Emgu.CV.MonoTouch.sln". You will find a MonoTouch solution with some sample projects. In this case we will walk you through creating a new face detection sample.

  • Add a new MonoTouch universal project (for both iPhone and iPad) and name the project FaceDetection.

  • Add references to the project. The following references are needed for FaceDetection: Emgu.Util.MonoTouch and Emgu.CV.MonoTouch. We also reference Emgu.CV.GPU.MonoTouch here because we are going to share the engine source code with Windows and Linux, which can take advantage of the GPU if a CUDA-compatible one is present. If you do not plan to reuse source code that takes advantage of the GPU on desktop/server platforms, you can skip this reference. We also need to reference OpenTK because it provides the System.Drawing namespace on MonoTouch. We can also add MonoTouch.Dialog-1 if we want to use it for GUI development.

[[File:MonoTouchProjectReference.png]]

  • Now go into the project configuration and make sure Linker behaviour is set to "Don't Link" for both debug and release, for both iPhone simulator and device; otherwise you may see compilation errors when compiling the project.

  • Go to the Advanced tab of the configuration. For supported architectures, you may want to select ARMv6+ARMv7 for both debug and release, for both iPhone simulator and device. This allows the compiled program to run on all versions of iPhone and iPad; enabling ARMv7 also means that if the iOS device contains an ARMv7 CPU, the program can take advantage of the ARMv7 architecture. For release mode on an iPhone device, you may want to enable "Use LLVM optimizing compiler" for best performance.

  • Next we need to add the data files to the project. We will add the image "lena.jpg" and the haarcascade files "haarcascade_eye.xml" and "haarcascade_frontalface_default.xml" as project content. We will also add the "DetectFace.cs" file from our Windows Face Detection example as a linked file, so that the computational routine is shared by the desktop and mobile versions of the Face Detection program.
  • Now go to the AppDelegate.cs file, where we will implement the GUI components. If you have added MonoTouch.Dialog-1 as a reference to the project, we can use it for the Face Detection GUI. At the top of the source file, add the following using statements:
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.Util;
using System.Drawing;
using MonoTouch.Dialog;

Add the following code in the body of the FinishedLaunching function:

         RootElement root = new RootElement("");
         UIImageView imageView = new UIImageView(window.Frame);
         StringElement messageElement = new StringElement("");

         root.Add(new Section() {
            new StyledStringElement("Process", delegate {
               long processingTime;
               using (Image<Bgr, Byte> image = new Image<Bgr, Byte>("lena.jpg"))
               {
                  DetectFace.DetectAndDraw(image, out processingTime);
                  using (Image<Bgr, Byte> resized = image.Resize((int)window.Frame.Width, (int)window.Frame.Height, Emgu.CV.CvEnum.INTER.CV_INTER_NN, true))
                  {
                     imageView.Frame = new RectangleF(PointF.Empty, resized.Size);
                     imageView.Image = resized.ToUIImage();
                  }
               }
               messageElement.Value = String.Format("Processing Time: {0} milliseconds.", processingTime);
               messageElement.GetImmediateRootElement().Reload(messageElement, UITableViewRowAnimation.Automatic);
               imageView.SetNeedsDisplay();
            })
         });
         root.Add(new Section() { messageElement });
         root.Add(new Section() { imageView });

         DialogViewController viewController = new DialogViewController(root);

The code above creates a "Process" button; once it is clicked, it loads "lena.jpg", performs face and eye detection, and draws the result on screen. The function DetectFace.DetectAndDraw is the same function provided in the Windows Face Detection example, which we re-use for mobile development.
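Note that the code above only constructs the DialogViewController; before FinishedLaunching returns, the controller still needs to be attached to the window. A minimal completion, a sketch based on the standard MonoTouch single-window application template rather than part of the original sample, would be:

         // Make the dialog controller the window's root and show the window.
         window.RootViewController = viewController;
         window.MakeKeyAndVisible();
         return true;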

  • We are now ready to test the face detection program on the simulator. Just click Run in MonoDevelop and the program will be built. The simulator will open, install the program, and run it. Click the "Process" button in the simulator and the result of the face detection will be displayed.
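For readers who do not have the Windows sample at hand, the shared routine can be sketched roughly as follows. The class name DetectFace and the signature of DetectAndDraw come from the code above, but the body here is only an illustration assuming the Emgu CV 2.x CascadeClassifier API; it is not the shipped DetectFace.cs.

         using System;
         using System.Drawing;
         using Emgu.CV;
         using Emgu.CV.Structure;

         public static class DetectFace
         {
            // Detects faces and eyes in the image, draws rectangles around them,
            // and reports the detection time in milliseconds.
            public static void DetectAndDraw(Image<Bgr, Byte> image, out long processingTime)
            {
               DateTime start = DateTime.Now;
               using (CascadeClassifier face = new CascadeClassifier("haarcascade_frontalface_default.xml"))
               using (CascadeClassifier eye = new CascadeClassifier("haarcascade_eye.xml"))
               using (Image<Gray, Byte> gray = image.Convert<Gray, Byte>())
               {
                  foreach (Rectangle f in face.DetectMultiScale(gray, 1.1, 10, Size.Empty, Size.Empty))
                  {
                     image.Draw(f, new Bgr(Color.Red), 2);
                     gray.ROI = f; // search for eyes inside the face region only
                     foreach (Rectangle e in eye.DetectMultiScale(gray, 1.1, 10, Size.Empty, Size.Empty))
                     {
                        Rectangle r = e;
                        r.Offset(f.X, f.Y); // eye rectangle is relative to the face ROI
                        image.Draw(r, new Bgr(Color.Blue), 2);
                     }
                     gray.ROI = Rectangle.Empty;
                  }
               }
               processingTime = (long)(DateTime.Now - start).TotalMilliseconds;
            }
         }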

[[File:MonoTouchFaceDetectionResultSimulator.png]]