Friday, October 30, 2009

ICM Class Notes - Using Images and Video - October 22, 2009

Here are my notes from last week’s ICM class, where we learned how to use pictures and video in Processing. It’s taken me a while to get around to posting these notes because I have been focused on developing the media controller for physical computing, and on my mid-term project. Without further ado, here are my notes.

Using Images in Processing
There are two main types of activities related to using and manipulating pictures in Processing:
1. Loading and displaying images
2. Reading and manipulating the pixels

Loading and Displaying Images
In Processing, images are handled by a class called PImage, which in many ways works like the PFont object. For example, to create a new image object you use the loadImage() function (passing a file name or URL) rather than a “new” object declaration. Here is sample code:

PImage cat;                   // declare the image variable
cat = loadImage("cat.jpg");   // create it with loadImage() rather than "new"

The image() function draws a loaded image onto the screen. It takes either three or five arguments: image(img, xpos, ypos) or image(img, xpos, ypos, width, height). An image can also be passed to the background() function, in which case it is drawn as the background (it needs to match the sketch’s dimensions). Here are some useful functions associated with displaying images (a short sketch after this list pulls several of them together):

  • The imageMode() function sets how the image coordinates are interpreted. The options are CORNER (the default), CENTER and CORNERS.
  • The tint() function changes the color of an image as it is drawn. It does not add color to the picture but rather filters out the other channels, so if you tint an image red, Processing drops all the G and B values to 0. tint() can also set the transparency of an image.
  • The PImage.get(xpos, ypos) function returns the color of the pixel at coordinate (xpos, ypos) in the image. get() can also be called on its own to read the color of a pixel at (xpos, ypos) in the Processing window.
  • The PImage.set(xpos, ypos, color) function sets the pixel at location xpos ypos to the color that is passed as the third argument.
  • The red(color), green(color) and blue(color) functions return how much of that component (red, green or blue) is in a given pixel’s color.
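
Here is a minimal sketch that pulls several of these calls together. The file name cat.jpg is just a placeholder for whatever image you drop into the sketch’s data folder:

PImage cat;

void setup() {
  size(400, 300);
  cat = loadImage("cat.jpg");     // any image in the sketch's data folder
}

void draw() {
  background(255);
  imageMode(CORNER);              // the default; CENTER and CORNERS also work
  tint(255, 0, 0);                // keep only the red channel
  image(cat, 0, 0, width, height);
  noTint();                       // stop tinting anything drawn afterwards

  color c = get(mouseX, mouseY);  // read the pixel under the mouse
  println(red(c));                // print how much red is in that pixel
}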

Loading and Displaying Videos
Processing handles video in much the same way it handles images (it actually treats a video as a series of standalone images). Live video in Processing is handled through the Capture class; to use this class you need to add the video library to your sketch. Note that pre-recorded videos, or movies, use a different class (Movie). Unfortunately, Processing does a poor job of handling movies, though it works well with live video feeds.

To declare a Capture object you use the standard “new” object declaration: myVideo = new Capture(this, width, height, frameRate). When you initialize a video object, Processing defaults to the system’s default camera. The name of an alternate video source can be passed as an extra argument between the height and the frame rate.
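
As a quick sketch of what that declaration looks like (the 320 by 240 size and 30 frames per second are just example values; the import line brings in the video library mentioned above):

import processing.video.*;

Capture myVideo;

void setup() {
  size(320, 240);
  // an optional camera name can be passed between the height and the frame rate
  myVideo = new Capture(this, 320, 240, 30);
}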

Before you display a frame you need to read it using the myVideo.read() function. The same image() function that is used for PImage objects will draw frames from the camera onto the screen. All of the image functions discussed above can also be used to analyze the video input.
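
Continuing the sketch above, a draw() loop along these lines would display the camera feed and sample a pixel from it; the available() check just makes sure a new frame has arrived before reading it:

void draw() {
  if (myVideo.available()) {
    myVideo.read();                       // grab the newest frame
  }
  image(myVideo, 0, 0);                   // draw the frame like any PImage

  color c = myVideo.get(mouseX, mouseY);  // sample a pixel, as with a still image
  println(red(c));
}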
