Image Synthesis

From strattonbrazil.com

Revision as of 07:15, 18 January 2015

I took this class with Dr. Pete Shirley in 2006. He was a very animated professor and loved talking about light and how it behaved. At one point, mid-sentence, he ran into the kitchen connected to our classroom and came back with a glass container, holding it up and looking at it from different angles to talk about refraction.

All of the assignments were completed via blog postings. I decided to write my renderer in Java for kicks, while most people chose to use C++. Looking back, I slightly regret my decision since I had no experience writing "fast" Java code and my renders took noticeably longer than my classmates'. Still, it was one of the most fun college courses I took.

Project 1

This first project samples various frequencies on the Macbeth color checker. If a sample passes a basic sampling check, its color is written to the frame buffer. All the samples are accumulated over each time step to get a better average color. All of these images were rendered at 720x480 (they seemed like good numbers) on a Toshiba Tecra S2.

1 Sample
16 Samples
256 Samples
1024 Samples
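The accumulation step above amounts to keeping a running sum per pixel and dividing by the sample count. A minimal sketch of that idea (class and method names are mine for illustration, not from the original renderer):

```java
// Minimal sketch of accumulating color samples over time steps.
// The running mean lets the image refine progressively without
// storing every individual sample.
public class AccumulationBuffer {
    private final double[] sumR, sumG, sumB;
    private final int[] count;
    private final int width;

    public AccumulationBuffer(int width, int height) {
        this.width = width;
        sumR = new double[width * height];
        sumG = new double[width * height];
        sumB = new double[width * height];
        count = new int[width * height];
    }

    // Add one color sample for the pixel at (x, y).
    public void addSample(int x, int y, double r, double g, double b) {
        int i = y * width + x;
        sumR[i] += r; sumG[i] += g; sumB[i] += b;
        count[i]++;
    }

    // Average color so far; black if the pixel has no samples yet.
    public double[] average(int x, int y) {
        int i = y * width + x;
        if (count[i] == 0) return new double[] {0.0, 0.0, 0.0};
        return new double[] {
            sumR[i] / count[i], sumG[i] / count[i], sumB[i] / count[i]
        };
    }
}
```

With this scheme the 1-sample image is noisy and the 1024-sample image converges toward the true average, matching the progression shown above.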

Project 2

This second project is similar to the first. I sampled XYZ estimates using the tristimulus curves and converted the samples to RGB on the graphics card using the standard Adobe RGB conversion matrix. It took my poor little laptop almost two minutes to render 1024 time steps at 720x480.

16 Samples
256 Samples
1024 Samples
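The XYZ-to-RGB step is a single 3x3 matrix multiply. A sketch, using the commonly published XYZ-to-linear-Adobe-RGB (1998) matrix for a D65 white point (the exact matrix the original shader used is an assumption):

```java
// XYZ -> linear Adobe RGB (1998) conversion, a sketch of the
// tristimulus-to-RGB step described above. Matrix values are the
// commonly published D65 Adobe RGB (1998) conversion.
public class XyzToRgb {
    static final double[][] M = {
        { 2.0413690, -0.5649464, -0.3446944},
        {-0.9692660,  1.8760108,  0.0415560},
        { 0.0134474, -0.1183897,  1.0154096}
    };

    // Multiply an XYZ triple by the conversion matrix.
    public static double[] convert(double x, double y, double z) {
        double[] rgb = new double[3];
        for (int i = 0; i < 3; i++)
            rgb[i] = M[i][0] * x + M[i][1] * y + M[i][2] * z;
        return rgb;
    }
}
```

A sanity check is that the D65 white point, roughly XYZ = (0.9505, 1.0, 1.0891), maps to approximately (1, 1, 1).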

Project 3

These are samples from a simulated sensor. The grid lies on the xy-plane, and a sphere emitting light from its surface in random directions hits the sensor grid and accumulates XYZ factors, which are converted to RGB on the graphics card and displayed to the screen. Below are images from this program taken at different numbers of photon emissions.

1,000 Emissions
1,000,000 Emissions
100,000,000 Emissions
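Emitting photons in "random directions" requires directions distributed uniformly over the sphere. One standard way to generate them, shown here as a sketch (whether the original code used this exact parameterization is an assumption):

```java
import java.util.Random;

// Uniform random unit vector, suitable for emitting photons from a
// sphere's surface in random directions. Picking cos(theta) uniformly
// in [-1, 1] and the azimuth uniformly in [0, 2*pi) gives a
// distribution that is uniform over the sphere's surface.
public class RandomDirection {
    public static double[] sample(Random rng) {
        double z = 2.0 * rng.nextDouble() - 1.0;       // cos(theta)
        double phi = 2.0 * Math.PI * rng.nextDouble(); // azimuth
        double r = Math.sqrt(1.0 - z * z);
        return new double[] { r * Math.cos(phi), r * Math.sin(phi), z };
    }
}
```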

Project 4 and 5

In the previous project, a single sphere-shaped light emitter was placed onto the sensor grid. In this project, the light source has been moved back and a pinhole has been added between the sensor grid and all other objects in the scene. The pinhole only lets a photon through if its path travels through the pinhole opening to reach the grid; all other photons are thrown out. This pinhole-camera scheme produces a much more detailed picture than the previous implementation, since it simulates a perspective frustum just like the eye or a real camera. A red diffuse sphere also bounces light from the light source toward the pinhole.
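The accept/reject test at the pinhole can be sketched as a ray-disk intersection: a photon is kept only if its path crosses the pinhole plane inside the opening. The class below is illustrative (the axis-aligned plane and all names are my assumptions, not the original code):

```java
// Sketch of the pinhole test: a photon at origin o traveling along
// direction d is kept only if its path crosses the pinhole plane
// (here z = planeZ) within the pinhole's radius.
public class Pinhole {
    final double planeZ, centerX, centerY, radius;

    Pinhole(double planeZ, double cx, double cy, double radius) {
        this.planeZ = planeZ; this.centerX = cx; this.centerY = cy;
        this.radius = radius;
    }

    // Returns true if the ray o + t*d (t > 0) crosses the pinhole
    // plane inside the opening.
    boolean passes(double[] o, double[] d) {
        if (Math.abs(d[2]) < 1e-12) return false; // parallel to the plane
        double t = (planeZ - o[2]) / d[2];
        if (t <= 0) return false;                 // plane is behind the photon
        double x = o[0] + t * d[0] - centerX;
        double y = o[1] + t * d[1] - centerY;
        return x * x + y * y <= radius * radius;
    }
}
```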

Included in this project was an importance sampling implementation, which fired photons directly at the diffuse sphere and at the pinhole. The number of photons fired at each object was weighted by the solid angle it subtends, relative to firing uniformly in all directions. Because photons were only sent toward the pinhole and the sphere, the scene was rendered using far fewer photons. Although importance sampling added some overhead, it greatly reduced the total collision-calculation time, there being far fewer photons in the scene.
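The solid angle a sphere subtends from the emitter has a closed form: omega = 2*pi*(1 - cos(theta)), where sin(theta) = r/d for a sphere of radius r at distance d. A sketch of that weight calculation (this is the standard cone solid-angle formula, not lifted from the original code):

```java
// Solid angle subtended by a sphere of radius r whose center is at
// distance d from the emitter, used to weight how many photons to
// aim at each object: omega = 2*pi*(1 - cos(theta)), sin(theta) = r/d.
public class SolidAngle {
    public static double ofSphere(double r, double d) {
        if (d <= r) return 4.0 * Math.PI; // emitter inside the sphere
        double ratio = r / d;
        double cosTheta = Math.sqrt(1.0 - ratio * ratio);
        return 2.0 * Math.PI * (1.0 - cosTheta);
    }
}
```

Dividing each object's solid angle by 4*pi gives the fraction of uniformly fired photons that would have hit it, which is the weight applied to its photons.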

Both of these projects were added to the previous code base at the same time. Importance sampling was required to make debugging the pinhole code more interactive. The final images look better than the previous project's, and took much less time to synthesize.

313,290,000 emissions (876,964 passed through the pinhole)

Project 6 and 15

In Projects 4 & 5, the imager was modified to handle importance sampling and a pinhole camera. Importance sampling provided much faster rendering times by sending photons only at known objects in the scene and weighting them based on their solid angle. Photons were generated at the sensor grid and sent through the scene until they hit a light source (or bounced too many times). This seemed to provide a good sampling of the scene, but required a major rewrite of the code.

This new version of the software adds a lens to the scene as well as motion blur. To handle motion blur, the red diffuse sphere was moved from (0, 1, 3) to (1, 2, 4) over the course of the rendering. This created an easy but interesting blur effect, as if the object was exposed over many sensor units while the aperture was open.
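One simple way to get this effect is to give each photon a random time in [0, 1] and linearly interpolate the sphere's center between its start and end positions; the (0, 1, 3) to (1, 2, 4) move above fits this directly. A sketch of the interpolation (the per-photon time sampling is my assumption about the approach):

```java
// Motion-blur sketch: the moving sphere's center is linearly
// interpolated between its start and end positions at a photon's
// sampled time t in [0, 1].
public class MotionBlur {
    public static double[] centerAt(double t, double[] start, double[] end) {
        return new double[] {
            start[0] + t * (end[0] - start[0]),
            start[1] + t * (end[1] - start[1]),
            start[2] + t * (end[2] - start[2])
        };
    }
}
```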

The lens was a simple biconvex lens attached to the pinhole. Upon contact with the lens, a photon's direction was bent based on its angle of incidence and the refractive index of the lens (1.4 for this lens). As the photon left the lens, it was bent again. Snell's law was used to calculate the angles of refraction for each photon.
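The Snell's-law bend can be written in vector form without computing the angles explicitly. A sketch of the standard formula (the original code's exact formulation is an assumption):

```java
// Snell's-law refraction of a unit direction d at a surface with
// unit normal n (pointing toward the incoming photon), going from
// a medium with index n1 into index n2 (e.g. air at 1.0 into the
// lens at 1.4). Returns null on total internal reflection.
public class Refract {
    public static double[] refract(double[] d, double[] n, double n1, double n2) {
        double eta = n1 / n2;
        double cosI = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2]);
        double sin2T = eta * eta * (1.0 - cosI * cosI);
        if (sin2T > 1.0) return null; // total internal reflection
        double cosT = Math.sqrt(1.0 - sin2T);
        double k = eta * cosI - cosT;
        return new double[] {
            eta * d[0] + k * n[0],
            eta * d[1] + k * n[1],
            eta * d[2] + k * n[2]
        };
    }
}
```

At normal incidence the direction passes through unchanged, and rays entering glass at an angle bend toward the normal, as expected for n2 > n1.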

Rendered using 18,000,000,000 photons -- 200,000 per sensor grid (4 hours, 11 seconds rendering time)

Project 7 and 8

In Project 7, we used our renderer to model the Cornell box, which was compared against the physical model to gauge the accuracy of the rendering software. The scene's walls and light are modeled using the geometry and color data from the Cornell box at http://www.graphics.cornell.edu/online/box/data.html.

In addition to the light source and the walls, Project 8 added Fresnel effects to the renderer. Instead of the two blocks found in the original Cornell box, two diffuse spheres were placed in the far corners of the room. A large transparent sphere sits in the center of the room, showing the light refracting through it.
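Fresnel effects govern how much light reflects off the transparent sphere versus refracting through it, as a function of the angle of incidence. Schlick's approximation is a common way to compute this fraction; whether the original renderer used it or the full Fresnel equations is an assumption:

```java
// Schlick's approximation to the Fresnel reflectance: the fraction
// of light reflected at a dielectric boundary. cosTheta is the
// cosine of the angle of incidence; n1 and n2 are the refractive
// indices on either side of the boundary.
public class Fresnel {
    public static double schlick(double cosTheta, double n1, double n2) {
        double r0 = (n1 - n2) / (n1 + n2);
        r0 *= r0;
        return r0 + (1.0 - r0) * Math.pow(1.0 - cosTheta, 5);
    }
}
```

At normal incidence an air-glass boundary reflects about 4% of the light, rising toward 100% at grazing angles, which is what gives the transparent sphere its bright silhouette.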

The scene's light looks rather blurry because it was moved down slightly so it wouldn't create artifacts with the ceiling, which is one giant polygon.

Rendered at 400x400 and tone mapped. It took 4 hours to render, which is rather sad.