Saturday, February 6, 2010

screen view change based on user face position

Here is another project from my free-time personal exploration of CV (Computer Vision).
The idea is to enhance the human-computer interface by detecting the position of the user's face in front of the screen.

The screen image is updated in real time, taking into account the head position of the user.

This is one of many possible ways of producing a 3D viewing effect, but this method works only for one user at a time.




This project is developed in Processing and requires the OpenCV library. I used the same setup described in the Arduino Processing Face Follower and in the previous Two Axis Controlled Laser Gun projects.

The main idea is to adapt the presentation of a 3D image shown in a window according to the user's face coordinates, as detected from the incoming webcam stream.
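In essence, the face center is mapped to a pair of viewing angles, and the virtual camera orbits the scene accordingly. Here is a minimal stand-alone sketch of just this principle (no webcam needed): the mouse position takes the place of the detected face center.

// Principle demo: the mouse position stands in for the detected face center.
float ra = 300;    // camera distance from the scene center

void setup() {
  size(800, 600, P3D);
}

void draw() {
  background(0);
  lights();
  float thxz = map(mouseX, 0, width, -50, 50);     // horizontal viewing angle (degrees)
  float thxy = map(mouseY, 0, height, -40, 40);    // vertical viewing angle (degrees)
  camera(ra * cos(radians(thxz)),    // eye x
         ra * sin(radians(thxy)),    // eye y
         ra * sin(radians(thxz)),    // eye z
         0, 0, 0,                    // look at the scene center
         0, 1, 0);                   // up vector
  stroke(255, 255, 0);
  box(100, 60, 120);
}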


Here is a (somewhat ugly) video.
I am sorry, but the quality is quite low: my phone camera is not that good.









Description
A three-dimensional object is shown on the screen, rendered according to the viewing position of the user, which is determined from the position of their face as detected by the webcam.
The code is not particularly clean and should be improved, but it works. A future version will reduce some flickering caused by failures in continuously detecting the user's face position, especially near the boundaries of the webcam's field of view (this was implemented in v 0.3).
The code is reasonably parametric, based on the initial window resolution parameters maxx and maxy. If you change these values, or if you have a different webcam, some tweaking might be needed in the map() statement used to define the value of r (version 0.3).
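Version 0.2 already declares r, r_p, r_pp for this averaging. The smoothing could go along these lines (a sketch of the idea only; the actual v 0.3 code is linked below): derive the distance from the detected face width with map(), then average the last three estimates to damp the flicker.

float r, r_p, r_pp;    // current and two previous distance estimates

float smoothedDistance(float fw, float fwmin, float fwmax,
                       float rmin, float rmax) {
  // a wider face in the frame means a closer viewer, so the output
  // range of map() is reversed (rmax down to rmin)
  float rNew = map(fw, fwmin, fwmax, rmax, rmin);
  r_pp = r_p;
  r_p = r;
  r = (rNew + r_p + r_pp) / 3;    // rolling average over the last 3 frames
  return r;
}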




Requirements
  • A working Processing and OpenCV installation (I developed this on Windows XP SP3)
  • A webcam.
Important: you need to have haarcascade_frontalface_alt.xml in your Processing sketch directory for the face detection algorithm to work; the relevant initialization calls are shown below.
Please follow the setup description in the previously mentioned projects.
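These are the initialization calls from the listing below that depend on that file; the cascade is loaded by name, so the xml must be reachable from the sketch directory:

opencv = new OpenCV(this);
opencv.capture( maxx, maxy );
opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );    // loads haarcascade_frontalface_alt.xml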



Code
Here is the Processing code.
Here is the 0.1 version (Jan 31, 2010).
Here is the 0.2 version, with autorotation features.
Here is the 0.3 version, with distance sensing and size dependent on distance (Feb 6, 2010).
Here is the 0.4 version, with side colored lights and 3D boxes scattered around.

The code listing of version 0.2 follows:

/*
  Processing and OpenCV code
 
  Detects viewer face movement and redraws the scene based on the face position
  by Marco Guardigli, mgua@tomware.it  @mgua on twitter
  see http://marco.guardigli.it/2010/01/screen-view-change-basing-on-user-face.html
 
  This code is released under GNU GPL license. See http://www.gnu.org
  jan 31.2010  v 0.1   Marco Guardigli
               v 0.2   Marco Guardigli, added autorotate
                                        disabled distance sensing (not working well)
                                        and introduced distance link to mouseX
 
*/

import hypermedia.video.*;
import java.awt.Rectangle;   // for the Rectangle type returned by opencv.detect()
OpenCV opencv;

boolean DEBUG = true; // set to TRUE for some debug output
boolean AUTOROTATE = true;  // set to true to enable autorotate

int maxx = 800;        // window size x
int maxy = 600;        // window size y
int cfacex, cfacey;    // center of the first face detected
float fw, fh;          // face width and face height (in relation to window size)
float rmin, rmax;      // range of perceived distance
float fwmin, fwmax;    // possible range of face width  (auto defined)

float ex, ey, ez;      // coordinates of the camera position (eye)
float upx, upy, upz;   // rotation of the camera (default 0,1,0);
float cx, cy, cz;      // center of the scene (where the camera points)
float ra;              // distance of the camera from center of the scene
float r, r_p, r_pp;    // 3 last values of measured distance, for averaging and smoothing

float drrp, maxdr;     // not used in v 0.2

float thxz, thxy, thyz;   // angles on the three planes xz, xy, yz, growing counterclockwise 0-359
float fthxz, fthxy, fthyz;   // angles on the three planes xz, xy, yz, for face
float athxz, athxy, athyz;   // angles on the three planes xz, xy, yz, for autorotate

float dthxy;    // quantum of autorotate angle change (degrees)
float dthxz;    // quantum of autorotate angle change (degrees)
float radbydeg = TWO_PI / 360;  // radians per degree

void setup() {
  size(maxx, maxy, P3D);
  fill(204);
  r = 100;
  cx = 0;  cy = 0;  cz = 0;
  ex = r;  ey = 0;  ez = 0;
  upx = 0; upy = 1; upz = 0;
  thxz = 0; thxy = 0; thyz = 0;

  rmax = 0; rmin = r;           // initial values for minmax detection (switched on purpose)
  ra = r; r_p = r; r_pp = r;    // previous values for averaging
 
  fwmin = 1; fwmax = 0;         // initial values for minmax detection (switched on purpose)

  opencv = new OpenCV(this);
  opencv.capture( maxx, maxy );
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );    // load the FRONTALFACE description file
  opencv.read();
  if (AUTOROTATE) {
      dthxy = 0;
      dthxz = 5;    // vertical axis
  }
}

void draw() {
  lights();
  background(0);
  camera(ex, ey, ez, cx, cy, cz, upx, upy, upz);
  stroke(255,255,0);
  box(50,30,60);   // draw a 3d solid composed of
  box(40,20,70);   // three simple intersecting
  box(60,10,10);   // boxes
 
  opencv.read();
  Rectangle[] faces = opencv.detect();     // detect anything resembling a FRONTALFACE

  if (faces.length > 0) {       
    cfacex = faces[0].x; cfacey = faces[0].y;
    fw = faces[0].width;
    cfacex = cfacex + int(fw / 2);        // cfacex = x center of face 

    fh = faces[0].height;
    cfacey = cfacey + int(fh / 2);        // cfacey = y center of face 
    fh = fh / maxy;

    fw = fw / maxx;                       // portion of screen width taken by face width
    if (fw < fwmin) { fwmin = fw; }       // detect min and max face width for autoscale
    if (fw > fwmax) { fwmax = fw; } 
  
    fthxy = map (cfacey,0,height,-40,40);  // input range for xy is 80 degrees (-40..40)
    fthxz = map (cfacex,0,width,-50,50);   // input range for xz is 100 degrees (-50..50)
    athxy = (athxy + dthxy) % 360;         // dthxy and dthxz are autorotate steps
    athxz = (athxz + dthxz) % 360;
    thxy = (fthxy + athxy) % 360;          // add autorotate to face position
    thxz = (fthxz + athxz) % 360;

    ex = ra * cos(thxz * radbydeg);   // new camera coordinates x
    ey = ra * sin(thxy * radbydeg);   // new camera coordinates y
    ez = ra * sin(thxz * radbydeg);   // new camera coordinates z
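    // note: this is not an exact spherical parametrization: with both angles
    // nonzero the eye-to-center distance slightly exceeds ra (by a factor of
    // sqrt(1 + sin^2(thxy))), but it looks fine for small head movements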
   
    ra = mouseX + 100;    // v 0.2: camera distance is linked to mouseX (distance sensing is disabled)

    if ( DEBUG ) {
      println("cfacex,cfacey=[" + cfacex + "],[" + cfacey + "]");
      println("facewidth,faceheight=[" + fw + "],[" + fh +"]");
      println("fwmin,fwmax=[" + fwmin + "],[" + fwmax +"]");
      println("r, rmin, rmax=[" + r + "],[" + rmin + "],[" + rmax + "]");
    }
  } else {
    println("face not detected");
  }
}



Similar projects
TrackEye, Real-Time Tracking of Human Eyes Using a Webcam, by Zafer Savas, is aimed at detecting the coordinates of the user's eyes.

Head Tracking for Desktop VR Displays using the Wii Remote, by Johnny Chung Lee, is a very nice hack that uses a Wii remote to sense the user's head position through the Wii infrared sensor.



Marco (@mgua on Twitter)

