Project: 2D Object Transfer and Video Compositing


In this project, we have developed a unified framework for realistically compositing scenes from multiple videos or images directly in the 2D image domain. Our approach is flexible: it composites video streams or images without any prior knowledge of the geometry or lighting of either the source or the target scene. The same flexibility allows virtual objects to be inserted in a realistic fashion. We combine this framework with our shadow and reflection synthesis techniques to provide image-based direct lighting during composition, generating realistic renderings, including shadows and reflections, whose color characteristics are learned from the target scene. We demonstrate a variety of direct lighting scenarios for both daytime and nighttime.
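To illustrate what "color characteristics learned from the target scene" can mean in practice, below is a minimal sketch of global color-statistics matching in the spirit of Reinhard-style color transfer. The function name and the per-channel mean/standard-deviation matching are illustrative assumptions, not the specific method used in this project.

```python
import numpy as np

def match_color_statistics(source, target):
    """Shift each channel of `source` so its mean and standard deviation
    match `target`'s -- a simple global color-transfer step (a sketch,
    not this project's actual color model)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std()
        t_mu, t_sigma = tgt[..., c].mean(), tgt[..., c].std()
        # Guard against a flat source channel (zero standard deviation).
        scale = t_sigma / s_sigma if s_sigma > 0 else 1.0
        out[..., c] = (src[..., c] - s_mu) * scale + t_mu
    return np.clip(out, 0, 255).astype(np.uint8)
```

A more faithful variant would perform the matching in a decorrelated color space rather than directly on RGB channels, but the statistics-matching idea is the same.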

Keywords: Video transfer, Video Cut-and-Paste, Video-based augmented reality.


  • Xiaochun Cao, Jiangjian Xiao, and Hassan Foroosh, A New Framework for Video Cut and Paste, Proc. of International Multimedia Modeling Conference, 2006.
  • Xiaochun Cao, Yuping Shen, Mubarak Shah, and Hassan Foroosh, Single View Compositing with Shadows, The Visual Computer, Volume 21, Numbers 8-10, pages 639-648, 2005.

How does it work? 

Top: One frame from the movie Sleepless in Seattle (1993). The extracted feature lines are plotted in five different colors: red, green, and blue lines run along the X, Y, and Z directions of the 3D world coordinate system, respectively; cyan lines are shadow lines cast by a fence; and magenta lines run along the light source direction. Bottom: The recovered 3D scene, where the yellow square pyramid on the right indicates the location of the camera that captured the scene.
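Feature lines along the three world axes converge at vanishing points in the image, which is the standard route from line annotations like those above to camera calibration. The sketch below shows the two basic steps: intersecting image lines to get a vanishing point, and recovering the focal length from two orthogonal vanishing points. The helper names are ours, and the zero-skew, unit-aspect-ratio, known-principal-point assumptions are simplifications, not necessarily the calibration model used in the papers above.

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two image points (cross product of
    # their homogeneous coordinates).
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(seg1, seg2):
    """Intersection of two image lines, each given as a pair of points.
    For projections of parallel 3D lines, this is their vanishing point."""
    v = np.cross(line_through(*seg1), line_through(*seg2))
    if abs(v[2]) < 1e-12:
        return None  # parallel in the image: vanishing point at infinity
    return v[:2] / v[2]

def focal_from_orthogonal_vps(v1, v2, pp):
    """Focal length from vanishing points of two orthogonal 3D directions,
    assuming zero skew, square pixels, and known principal point pp:
    (v1 - pp) . (v2 - pp) + f^2 = 0."""
    d = -np.dot(np.asarray(v1) - np.asarray(pp),
                np.asarray(v2) - np.asarray(pp))
    return np.sqrt(d) if d > 0 else None
```

With the focal length and all three orthogonal vanishing points, the camera rotation follows as well, which is what allows the recovered 3D scene and camera pyramid shown above to be displayed in a common coordinate frame.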


Source from Sleepless in Seattle
Source from a hand-held camcorder
Composited video
More Examples: Source and composited videos