<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://strattonbrazil.com/wiki/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://strattonbrazil.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Strattonbrazil</id>
		<title>strattonbrazil.com - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://strattonbrazil.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Strattonbrazil"/>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Special:Contributions/Strattonbrazil"/>
		<updated>2026-04-29T06:28:06Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.23.1</generator>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown</id>
		<title>Roger Gracie BJJ Competion Breakdown</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown"/>
				<updated>2020-11-05T07:47:31Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
This is a breakdown of every Roger Gracie Brazilian jiu jitsu competition available on YouTube. Roger is one of the most renowned and successful jiu jitsu competitors in the world. He is known for using simple, fundamental moves to secure victories, and his techniques are worth closer study. &lt;br /&gt;
&lt;br /&gt;
This is a work in progress. I'll add videos over time. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;FEEDBACK&amp;lt;/b&amp;gt;: If you notice a mislabeled move, a missing technique, or any other mistake, you can submit feedback through [https://forms.gle/wFo6rGKbpbrytXNS9 this form].&lt;br /&gt;
&lt;br /&gt;
== Roger Gracie vs Marcus Almeida (Buchecha) - 2017 ==&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|_L-Ni7bFAHg}}&lt;br /&gt;
* Buchecha shoots for several inside single legs&lt;br /&gt;
* Roger pulls guard at ~8:00&lt;br /&gt;
* Roger gets cross-arm control (possible setup for hip bump sweep) and moves to the side ~8:17&lt;br /&gt;
* Buchecha turns into him, Roger traps Buchecha's leg ~8:19&lt;br /&gt;
* Buchecha turns away and inverts, closing guard on Roger ~8:22&lt;br /&gt;
* Buchecha opens his legs attempting to sweep Roger; Roger pressures down and walks around to the back ~8:28&lt;br /&gt;
* Buchecha attempts to stand up, but Roger puts in hooks and takes him to the ground ~8:30&lt;br /&gt;
* Roger grabs the cross-lapel grips ~8:40&lt;br /&gt;
* Roger submits Buchecha with a cross-lapel choke from the back ~8:55&lt;br /&gt;
&lt;br /&gt;
Buchecha is a world-class jiu jitsu champion with an especially effective single-leg takedown. For much of the opening, Roger uses his head position to block Buchecha from closing distance and lifts Buchecha's belt to keep him from getting underneath.&lt;br /&gt;
&lt;br /&gt;
== Roger Gracie vs Rodrigo Medeiros (Comprido) ==&lt;br /&gt;
{{#ev:youtube|gDw_pDXq4dc}}&lt;br /&gt;
* Comprido reaches for a single leg ~2:10&lt;br /&gt;
* Roger secures the far arm to defend attacks on the other leg&lt;br /&gt;
* Comprido goes for a takedown, driving his weight into Roger ~2:51&lt;br /&gt;
* Roger gets hooks in and wrist control ~2:51&lt;br /&gt;
* Roger transitions to full guard ~2:59&lt;br /&gt;
* Roger pulls Comprido's arm across his chest and reaches around his back to grab his far lapel, sweeps at 4:16&lt;br /&gt;
** this is a common sweep used by Roger -- if you know the name of this sweep please message me using the form above&lt;br /&gt;
* Comprido rolls to his side and Roger takes his back with one hook in ~4:31&lt;br /&gt;
* Roger grabs inside wrist control ~4:50&lt;br /&gt;
* Roger gives it up and starts searching for lapels ~4:58&lt;br /&gt;
* Roger transitions to S-mount searching for near arm ~6:20&lt;br /&gt;
* Roger secures far arm ~6:30&lt;br /&gt;
* Roger transitions to near-side armbar, Comprido taps at 6:49&lt;br /&gt;
&lt;br /&gt;
In a relatively quick match, Comprido uses a single leg, which remains a very popular and effective takedown even at the top levels of jiu jitsu. Roger transitions to full guard, as he often does in competition, and uses a very basic but well-executed sweep to gain mount and work to an armbar. Note how patiently and deliberately Roger moves between positions, leaving little opportunity to slip out.&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown</id>
		<title>Roger Gracie BJJ Competion Breakdown</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown"/>
				<updated>2020-07-18T19:19:24Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: /* Roger Gracie vs Buchecha - 2017 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
This is a breakdown of every Roger Gracie Brazilian jiu jitsu competition available on YouTube. Roger is one of the most renowned and successful jiu jitsu competitors in the world. He is known for using simple, fundamental moves to secure victories, and his techniques are worth closer study. &lt;br /&gt;
&lt;br /&gt;
This is a work in progress. I'll add videos over time. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;FEEDBACK&amp;lt;/b&amp;gt;: If you notice a mislabeled move, a missing technique, or any other mistake, you can submit feedback through [https://forms.gle/wFo6rGKbpbrytXNS9 this form].&lt;br /&gt;
&lt;br /&gt;
== Roger Gracie vs Buchecha - 2017 ==&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|_L-Ni7bFAHg}}&lt;br /&gt;
* Buchecha shoots for several inside single legs&lt;br /&gt;
* Roger pulls guard at ~8:00&lt;br /&gt;
* Roger gets cross-arm control (possible setup for hip bump sweep) and moves to the side ~8:17&lt;br /&gt;
* Buchecha turns into him, Roger traps Buchecha's leg ~8:19&lt;br /&gt;
* Buchecha turns away and inverts, closing guard on Roger ~8:22&lt;br /&gt;
* Buchecha opens his legs attempting to sweep Roger; Roger pressures down and walks around to the back ~8:28&lt;br /&gt;
* Buchecha attempts to stand up, but Roger puts in hooks and takes him to the ground ~8:30&lt;br /&gt;
* Roger grabs the cross-lapel grips ~8:40&lt;br /&gt;
* Roger submits Buchecha with a cross-lapel choke from the back ~8:55&lt;br /&gt;
&lt;br /&gt;
Buchecha is a world-class jiu jitsu champion with an especially effective single-leg takedown. For much of the opening, Roger uses his head position to block Buchecha from closing distance and lifts Buchecha's belt to keep him from getting underneath.&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown</id>
		<title>Roger Gracie BJJ Competion Breakdown</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown"/>
				<updated>2020-07-18T19:10:56Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: /* Roger Gracie vs Buchecha - 2017 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
This is a breakdown of every Roger Gracie Brazilian jiu jitsu competition available on YouTube. Roger is one of the most renowned and successful jiu jitsu competitors in the world. He is known for using simple, fundamental moves to secure victories, and his techniques are worth closer study. &lt;br /&gt;
&lt;br /&gt;
This is a work in progress. I'll add videos over time. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;FEEDBACK&amp;lt;/b&amp;gt;: If you notice a mislabeled move, a missing technique, or any other mistake, you can submit feedback through [https://forms.gle/wFo6rGKbpbrytXNS9 this form].&lt;br /&gt;
&lt;br /&gt;
== Roger Gracie vs Buchecha - 2017 ==&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|_L-Ni7bFAHg}}&lt;br /&gt;
* Buchecha shoots for several inside single legs&lt;br /&gt;
* Roger pulls guard at ~8:00&lt;br /&gt;
* Roger gets cross-arm control (possible setup for hip bump sweep) and moves to the side ~8:17&lt;br /&gt;
* Buchecha turns into him, Roger traps Buchecha's leg ~8:19&lt;br /&gt;
* Buchecha turns away and inverts, closing guard on Roger ~8:22&lt;br /&gt;
* Buchecha opens his legs attempting to sweep Roger; Roger pressures down and walks around to the back ~8:28&lt;br /&gt;
* Buchecha attempts to stand up, but Roger puts in hooks and takes him to the ground ~8:30&lt;br /&gt;
* Roger grabs the cross-lapel grips ~8:40&lt;br /&gt;
* Roger submits Buchecha with a cross-lapel choke from the back ~8:55&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown</id>
		<title>Roger Gracie BJJ Competion Breakdown</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown"/>
				<updated>2020-07-18T16:49:35Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
This is a breakdown of every Roger Gracie Brazilian jiu jitsu competition available on YouTube. Roger is one of the most renowned and successful jiu jitsu competitors in the world. He is known for using simple, fundamental moves to secure victories, and his techniques are worth closer study. &lt;br /&gt;
&lt;br /&gt;
This is a work in progress. I'll add videos over time. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;FEEDBACK&amp;lt;/b&amp;gt;: If you notice a mislabeled move, a missing technique, or any other mistake, you can submit feedback through [https://forms.gle/wFo6rGKbpbrytXNS9 this form].&lt;br /&gt;
&lt;br /&gt;
== Roger Gracie vs Buchecha - 2017 ==&lt;br /&gt;
&lt;br /&gt;
{{#ev:youtube|_L-Ni7bFAHg}}&lt;br /&gt;
* Buchecha shoots for several inside single legs&lt;br /&gt;
* Roger pulls guard at ~8:00&lt;br /&gt;
* Roger gets cross-arm control (possible setup for hip bump sweep) and moves to the side ~8:17&lt;br /&gt;
* Buchecha turns into him, Roger traps Buchecha's leg ~8:19&lt;br /&gt;
* Buchecha turns away and inverts, closing guard on Roger ~8:22&lt;br /&gt;
* Buchecha opens his legs attempting to sweep Roger; Roger pressures down and walks around to the back ~8:28&lt;br /&gt;
* Buchecha attempts to stand up, but Roger puts in hooks and takes him to the ground ~8:30&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown</id>
		<title>Roger Gracie BJJ Competion Breakdown</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown"/>
				<updated>2020-07-18T06:10:11Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
This is a breakdown of every move used by Roger Gracie in Brazilian jiu jitsu competition. Roger is one of the most renowned and successful jiu jitsu competitors in the world. He is known for using simple, fundamental moves to secure victories, and his techniques are worth closer study. &lt;br /&gt;
&lt;br /&gt;
This is a work in progress. I'll add videos over time. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;FEEDBACK&amp;lt;/b&amp;gt;: If you notice a mislabeled move, missing technique or any other mistake you can submit feedback through [https://forms.gle/wFo6rGKbpbrytXNS9 this form].&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown</id>
		<title>Roger Gracie BJJ Competion Breakdown</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Competion_Breakdown"/>
				<updated>2020-07-18T06:01:55Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: Created page with &amp;quot;== Summary == This is a breakdown of every move used by Roger Gracie in Brazilian jiu jitsu competition. Roger is one of the most renowned and successful jiu jitsu competitors...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
This is a breakdown of every move used by Roger Gracie in Brazilian jiu jitsu competition. Roger is one of the most renowned and successful jiu jitsu competitors in the world. He is known for using simple, fundamental moves to secure victories, and his techniques are worth closer study. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;background:#00FF00&amp;quot;&amp;gt;This is a work in progress. If you notice a mislabeled move or any other mistake&amp;lt;/span&amp;gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Main_Page"/>
				<updated>2020-07-18T05:56:12Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Software Development Hiring ==&lt;br /&gt;
&lt;br /&gt;
* [[Building Your Resume]]&lt;br /&gt;
* [[Interviewing a Technical Candidate]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Using OpenID]] - why most sites should use OpenID instead of usernames and passwords&lt;br /&gt;
* [[Computer Graphics]]&lt;br /&gt;
* [[Writing Tips]] - my collection of useful writing tips, videos, and reviews&lt;br /&gt;
* [[Juniper Pulse on 64-bit Linux]]&lt;br /&gt;
* [[Roger Gracie BJJ Competion Breakdown]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.github.com/strattonbrazil My Github Page]&lt;br /&gt;
* [http://www.dotnet-tricks.com/Tutorial/designpatterns/2FMM060314-Understanding-MVC,-MVP-and-MVVM-Design-Patterns.html Understanding MVC, MVP, and MVVM Design Patterns] - I care extremely little for the whole MVC/not-MVC arguments, but I found this article helpful for those interested&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Footage</id>
		<title>Roger Gracie BJJ Footage</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Roger_Gracie_BJJ_Footage"/>
				<updated>2020-07-15T05:06:49Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: Created page with &amp;quot;&amp;lt;youtube&amp;gt;_L-Ni7bFAHg&amp;lt;/youtube&amp;gt;&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;youtube&amp;gt;_L-Ni7bFAHg&amp;lt;/youtube&amp;gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Main_Page"/>
				<updated>2020-07-15T05:05:34Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Software Development Hiring ==&lt;br /&gt;
&lt;br /&gt;
* [[Building Your Resume]]&lt;br /&gt;
* [[Interviewing a Technical Candidate]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Using OpenID]] - why most sites should use OpenID instead of usernames and passwords&lt;br /&gt;
* [[Computer Graphics]]&lt;br /&gt;
* [[Writing Tips]] - my collection of useful writing tips, videos, and reviews&lt;br /&gt;
* [[Juniper Pulse on 64-bit Linux]]&lt;br /&gt;
* [[Roger Gracie BJJ Footage]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.github.com/strattonbrazil My Github Page]&lt;br /&gt;
* [http://www.dotnet-tricks.com/Tutorial/designpatterns/2FMM060314-Understanding-MVC,-MVP-and-MVVM-Design-Patterns.html Understanding MVC, MVP, and MVVM Design Patterns] - I care extremely little for the whole MVC/not-MVC arguments, but I found this article helpful for those interested&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Main_Page"/>
				<updated>2020-07-15T05:04:42Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: /* Miscellaneous */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Software Development Hiring ==&lt;br /&gt;
&lt;br /&gt;
* [[Building Your Resume]]&lt;br /&gt;
* [[Interviewing a Technical Candidate]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Using OpenID]] - why most sites should use OpenID instead of usernames and passwords&lt;br /&gt;
* [[Computer Graphics]]&lt;br /&gt;
* [[Writing Tips]] - my collection of useful writing tips, videos, and reviews&lt;br /&gt;
* [[Juniper Pulse on 64-bit Linux]]&lt;br /&gt;
* [[Roger Gracie BJJ]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.github.com/strattonbrazil My Github Page]&lt;br /&gt;
* [http://www.dotnet-tricks.com/Tutorial/designpatterns/2FMM060314-Understanding-MVC,-MVP-and-MVVM-Design-Patterns.html Understanding MVC, MVP, and MVVM Design Patterns] - I care extremely little for the whole MVC/not-MVC arguments, but I found this article helpful for those interested&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Building_Your_Resume</id>
		<title>Building Your Resume</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Building_Your_Resume"/>
				<updated>2020-07-06T17:19:01Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Formatting ==&lt;br /&gt;
Make sure the resume is cleanly formatted into sections. Most resumes should use the following organization:&lt;br /&gt;
* header including your name and contact information&lt;br /&gt;
* experience (ordered most recent to least recent)&lt;br /&gt;
* education&lt;br /&gt;
* skills&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Building_Your_Resume</id>
		<title>Building Your Resume</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Building_Your_Resume"/>
				<updated>2020-07-06T17:18:35Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Formatting ==&lt;br /&gt;
Make sure the resume is cleanly formatted into sections. Most resumes should use the following organization:&lt;br /&gt;
* header including your name and contact information&lt;br /&gt;
* experience (typically most recent to least recent)&lt;br /&gt;
* education&lt;br /&gt;
* skills&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Building_Your_Resume</id>
		<title>Building Your Resume</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Building_Your_Resume"/>
				<updated>2020-07-06T17:18:04Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: Created page with &amp;quot;== Formatting == Make sure the resume is cleanly formatted into sections. Most resumes use the following progression: * header including your name and contact information * ex...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Formatting ==&lt;br /&gt;
Make sure the resume is cleanly formatted into sections. Most resumes use the following progression:&lt;br /&gt;
* header including your name and contact information&lt;br /&gt;
* experience (typically most recent to least recent)&lt;br /&gt;
* education&lt;br /&gt;
* skills&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Main_Page"/>
				<updated>2020-07-06T17:14:15Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: /* Pages */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Software Development Hiring ==&lt;br /&gt;
&lt;br /&gt;
* [[Building Your Resume]]&lt;br /&gt;
* [[Interviewing a Technical Candidate]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Using OpenID]] - why most sites should use OpenID instead of usernames and passwords&lt;br /&gt;
* [[Computer Graphics]]&lt;br /&gt;
* [[Writing Tips]] - my collection of useful writing tips, videos, and reviews&lt;br /&gt;
* [[Juniper Pulse on 64-bit Linux]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.github.com/strattonbrazil My Github Page]&lt;br /&gt;
* [http://www.dotnet-tricks.com/Tutorial/designpatterns/2FMM060314-Understanding-MVC,-MVP-and-MVVM-Design-Patterns.html Understanding MVC, MVP, and MVVM Design Patterns] - I care extremely little for the whole MVC/not-MVC arguments, but I found this article helpful for those interested&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Juniper_Pulse_on_64-bit_Linux</id>
		<title>Juniper Pulse on 64-bit Linux</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Juniper_Pulse_on_64-bit_Linux"/>
				<updated>2017-01-17T17:57:29Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: Created page with &amp;quot;Juniper hasn't done a very good job of supporting 64-bit Linux.  They've only provided 32-bit binaries, which has necessitated several complex workarounds.  With the release o...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Juniper hasn't done a very good job of supporting 64-bit Linux.  They've only provided 32-bit binaries, which has necessitated several complex workarounds.  With the release of their Pulse client, it is now much easier, but still a little cumbersome.  &lt;br /&gt;
&lt;br /&gt;
== Download the Client ==&lt;br /&gt;
&lt;br /&gt;
The Linux client doesn't seem to be listed on the software list (only Windows and Mac OS X), so I would recommend just googling for &amp;quot;Juniper Linux Pulse Client&amp;quot; (such as this [https://kb.pulsesecure.net/articles/Pulse_Secure_Article/KB40126 link]).  &lt;br /&gt;
&lt;br /&gt;
Follow the whole install process so you eventually have the client installed in /usr/local/pulse.  &lt;br /&gt;
&lt;br /&gt;
== Add a Startup Script ==&lt;br /&gt;
&lt;br /&gt;
Running the application directly will raise a path error because the libraries bundled with the client aren't on the library search path. A small wrapper script works around this: &lt;br /&gt;
&lt;br /&gt;
    #!/bin/bash&lt;br /&gt;
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/pulse /usr/local/pulse/pulseUi&lt;br /&gt;
&lt;br /&gt;
== Add Required 32-bit Libs ==&lt;br /&gt;
&lt;br /&gt;
At this point, running the client will probably raise some missing library errors.  The 32-bit libraries need to be installed.  For example:&lt;br /&gt;
&lt;br /&gt;
    sudo apt-get install libwebkitgtk-1.0-0:i386&lt;br /&gt;
&lt;br /&gt;
This will get rid of the following error:&lt;br /&gt;
&lt;br /&gt;
    /usr/local/pulse/pulseUi: error while loading shared libraries: libwebkitgtk-1.0.so.0: cannot open shared object file: No such file or directory&lt;br /&gt;
&lt;br /&gt;
== Running the Client ==&lt;br /&gt;
&lt;br /&gt;
At this point, you should be able to launch the client through the bash script above and successfully connect to the VPN.&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Main_Page"/>
				<updated>2017-01-17T17:35:07Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: /* Pages */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Pages ==&lt;br /&gt;
&lt;br /&gt;
* [[Using OpenID]] - why most sites should use OpenID instead of usernames and passwords&lt;br /&gt;
* [[Interviewing a Technical Candidate]]&lt;br /&gt;
* [[Computer Graphics]]&lt;br /&gt;
* [[Writing Tips]] - my collection of useful writing tips, videos, and reviews&lt;br /&gt;
* [[Juniper Pulse on 64-bit Linux]]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.github.com/strattonbrazil My Github Page]&lt;br /&gt;
* [http://www.dotnet-tricks.com/Tutorial/designpatterns/2FMM060314-Understanding-MVC,-MVP-and-MVVM-Design-Patterns.html Understanding MVC, MVP, and MVVM Design Patterns] - I care extremely little for the whole MVC/not-MVC arguments, but I found this article helpful for those interested&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Main_Page"/>
				<updated>2015-01-23T16:01:13Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Pages ==&lt;br /&gt;
&lt;br /&gt;
* [[Using OpenID]] - why most sites should use OpenID instead of usernames and passwords&lt;br /&gt;
* [[Interviewing a Technical Candidate]]&lt;br /&gt;
* [[Computer Graphics]]&lt;br /&gt;
* [[Writing Tips]] - my collection of useful writing tips, videos, and reviews&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.github.com/strattonbrazil My Github Page]&lt;br /&gt;
* [http://www.dotnet-tricks.com/Tutorial/designpatterns/2FMM060314-Understanding-MVC,-MVP-and-MVVM-Design-Patterns.html Understanding MVC, MVP, and MVVM Design Patterns] - I care extremely little for the whole MVC/not-MVC arguments, but I found this article helpful for those interested&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_mask.png</id>
		<title>File:Im synth p14 mask.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_mask.png"/>
				<updated>2015-01-18T07:33:26Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_combo.png</id>
		<title>File:Im synth p14 combo.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_combo.png"/>
				<updated>2015-01-18T07:33:09Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_rust.png</id>
		<title>File:Im synth p14 rust.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_rust.png"/>
				<updated>2015-01-18T07:32:40Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_hg.png</id>
		<title>File:Im synth p14 hg.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_hg.png"/>
				<updated>2015-01-18T07:32:29Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis</id>
		<title>Image Synthesis</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis"/>
				<updated>2015-01-18T07:31:47Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: /* Project 14 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I took this class with [http://psgraphics.blogspot.com/ Dr. Pete Shirley] in 2006.  He was a very animated professor and loved talking about light and how it behaved.  At one point I remember him, mid-sentence, running into the kitchen connected to our classroom and coming back with a glass container, which he held up at different angles to talk about refraction.  &lt;br /&gt;
&lt;br /&gt;
All of the assignments were completed via blog postings.  I decided to write my renderer in Java for kicks, while most people chose to use C++.  Looking back I slightly regret my decision, since I had no experience writing &amp;quot;fast&amp;quot; Java code and my renders took noticeably longer than my classmates'.  Still, it was one of the most fun college courses I took.  &lt;br /&gt;
&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
This first project samples various frequencies on the Macbeth color checker. If a sample passes a basic check, its color is written to the frame buffer. All the samples are accumulated over each time step to get a better average color. All these images were rendered at 720x480 (they seemed like good numbers). They were rendered on a Toshiba Tecra S2. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1.png|thumb|center|1 Sample]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
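The progressive renders above amount to keeping a running average of the samples for each pixel. A minimal sketch of that accumulation step (plain Python rather than the original Java renderer; all names are made up):&lt;br /&gt;

```python
# Running-average accumulation of per-pixel samples, as in the
# progressive renders above (hypothetical standalone sketch).
def accumulate(mean, new_sample, n):
    """Update the running mean after the n-th sample (n starts at 1)."""
    return mean + (new_sample - mean) / n

mean = 0.0
samples = [0.2, 0.6, 0.4, 0.8]  # made-up radiance samples for one pixel
for i, s in enumerate(samples, start=1):
    mean = accumulate(mean, s, i)
print(mean)  # average of the four samples: 0.5
```

Updating incrementally like this avoids storing every sample, which matters when a pixel accumulates hundreds of time steps.&lt;br /&gt;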
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
This second project is similar to the first. I sampled XYZ estimates using the tristimulus curves and converted the samples to RGB on the graphics card using the standard Adobe RGB conversion matrix. It took my poor little laptop almost two minutes to render 1024 time steps at 720x480. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
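The XYZ-to-RGB step is just a 3x3 matrix multiply per sample. A sketch using the commonly published XYZ-to-Adobe-RGB (D65) coefficients (treat the exact values as approximate reference numbers; plain Python here rather than the original GPU code):&lt;br /&gt;

```python
# XYZ to linear Adobe RGB via a 3x3 matrix multiply (sketch).
# Matrix values are the widely published XYZ-to-Adobe-RGB (D65)
# coefficients; treat them as approximate reference numbers.
M = [
    [ 2.0413690, -0.5649464, -0.3446944],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0134474, -0.1183897,  1.0154096],
]

def xyz_to_rgb(xyz):
    return [sum(M[r][c] * xyz[c] for c in range(3)) for r in range(3)]

# The D65 white point (X, Y, Z) should map to roughly (1, 1, 1):
print(xyz_to_rgb([0.95047, 1.0, 1.08883]))
```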
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
These are samples from a simulated sensor. The grid lies on the xy-plane, and a sphere emits light from its surface in random directions; photons that hit the sensor grid accumulate XYZ factors, which are converted to RGB on the graphics card and displayed on screen. Below are images from this program taken at different numbers of photon emissions. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000.png|frame|center|1,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000000.png|frame|center|1,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s100000000.png|frame|center|100,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
== Project 4 and 5 ==&lt;br /&gt;
&lt;br /&gt;
In the previous project, a single sphere-shaped light emitter was placed onto the sensor grid. In this example, the light source has been moved back and a pinhole added between the sensor grid and all other objects in the scene. The pinhole only lets photons through if their paths travel inside the pinhole to reach the grid; other photons are thrown out. This pinhole-camera scheme produces a much more detailed picture than the previous implementation, since it simulates a perspective frustum just like the eye or a real camera. A red diffuse sphere also bounces light from the light source toward the pinhole. &lt;br /&gt;
&lt;br /&gt;
Included in this project was an importance sampling implementation. This fired photons at the diffuse sphere and directly at the pinhole. The number of photons fired at each object was weighted by its solid angle relative to the total number of photons previously fired in all directions. Because photons were sent only at the pinhole and the object, the scene was rendered using far fewer photons. Although more overhead was required to compute the importance sampling, it greatly reduced the overall collision-calculation time (since there were far fewer photons in the scene). &lt;br /&gt;
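The photon-budget split can be sketched like this. It's a minimal sketch under my own assumptions: the function names and the example radii and distances are hypothetical, not taken from the original Java code.

```python
import math

def solid_angle_of_sphere(radius, distance):
    """Solid angle subtended by a sphere of the given radius seen from the given distance."""
    sin_half = radius / distance
    # Omega = 2*pi*(1 - cos(half-angle)) of the cone tangent to the sphere
    return 2.0 * math.pi * (1.0 - math.sqrt(1.0 - sin_half * sin_half))

def split_photon_budget(total_photons, targets):
    """targets: list of (radius, distance) pairs. Returns photons to fire at each."""
    angles = [solid_angle_of_sphere(r, d) for r, d in targets]
    total_angle = sum(angles)
    return [int(total_photons * a / total_angle) for a in angles]

# e.g. a tiny pinhole far away vs. a large diffuse sphere nearby:
budget = split_photon_budget(1_000_000, [(0.05, 4.0), (1.0, 3.0)])
```

The big nearby sphere gets almost the whole budget, which is exactly why so few photons are wasted compared to firing uniformly in all directions.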
&lt;br /&gt;
Both of these projects were added to the previous code base at the same time. Importance sampling was required to make debugging the pinhole code more interactive. The final images look better than those from the previous project and took much less time to synthesize. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p4_is.png|frame|center|313,290,000 emissions (876,964 passed through the pinhole)]]&lt;br /&gt;
&lt;br /&gt;
== Project 6 and 15 ==&lt;br /&gt;
&lt;br /&gt;
In Project 4 &amp;amp; 5, the imager was modified to handle importance sampling and a pinhole camera. The importance sampling provided much faster rendering times by sending photons only at known objects in the scene, and weighting them based on their angle. The photons were generated at the sensor grid and sent through the scene until they hit a light source (or bounced too much). This seemed to provide a good sampling of the scene, but required a major rewrite of the code. &lt;br /&gt;
&lt;br /&gt;
This new version of the software adds a lens to the scene as well as motion blur. To produce motion blur, the red diffuse sphere was moved from (0, 1, 3) to (1, 2, 4) over the course of the rendering. This created a simple but interesting blur effect, as if the object were exposed over many sensor units while the aperture was open. &lt;br /&gt;
&lt;br /&gt;
The lens was a simple biconvex lens attached to the pinhole. Upon contact with the lens, the photon's direction was bent based on its angle of incidence and the refractive index of the lens (1.4 in this case). As the photon left the lens, it was bent again. Snell's law was used to calculate the angle of refraction for each photon. &lt;br /&gt;
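The bending at each lens surface can be sketched with Snell's law. This is a minimal sketch assuming angles measured from the surface normal, in radians; the function name is mine, not from the original code.

```python
import math

def refract_angle(theta_in, n_from=1.0, n_to=1.4):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2). Returns the refracted angle."""
    sin_out = n_from * math.sin(theta_in) / n_to
    return math.asin(sin_out)  # valid while sin_out stays within [-1, 1]

# Entering the lens bends the ray toward the normal...
inside = refract_angle(math.radians(30.0), 1.0, 1.4)
# ...and leaving the far surface bends it back away from the normal.
outside = refract_angle(inside, 1.4, 1.0)
```

For parallel lens surfaces the exit angle equals the entry angle; it's the curvature of the biconvex surfaces that makes the lens actually focus.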
&lt;br /&gt;
[[File:im_synth_p6_is.png|frame|center|Rendered using 18,000,000,000 photons -- 200,000 per sensor grid (4 hours, 11 seconds rendering time)]]&lt;br /&gt;
&lt;br /&gt;
== Project 7 and 8 ==&lt;br /&gt;
&lt;br /&gt;
In Project 7, we were to use our renderer to model the Cornell box. This box has a physical counterpart that renderings can be compared against to gauge the accuracy of the rendering software. The scene's walls and light are modeled using the geometry and color data from the Cornell box at http://www.graphics.cornell.edu/online/box/data.html. &lt;br /&gt;
&lt;br /&gt;
In addition to the light source and the walls, Project 8 added Fresnel effects to the renderer. Instead of the two blocks found in the original Cornell box, two diffuse spheres have been placed in the far corners of the room. A large transparent sphere is placed in the center of the room, which shows the light refracting through it. &lt;br /&gt;
&lt;br /&gt;
The scene's light seems rather blurry because it was moved down slightly to not create artifacts with the ceiling, which is one giant polygon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p7_cb.png|frame|center|rendered at 400x400 and tone mapped. It took 4 hours to render, which is rather sad]]&lt;br /&gt;
&lt;br /&gt;
== Project 9 ==&lt;br /&gt;
&lt;br /&gt;
In Project 9, we were to add the Beer-Lambert Law to our renderer. The Beer-Lambert Law models the amount of light absorbed while traveling through a medium. Because each wavelength is absorbed at a different rate depending on the distance it must travel through the medium, some colors are absorbed more than others, causing certain wavelengths to become more pronounced. Certain types of glass absorb the high and low wavelengths, leaving a greenish tint at certain angles. This effect is usually most visible where the light passes through the greatest thickness of the glass. To compute the absorption at a given frequency, Euler's number e was raised to the power of the distance times a large negative constant (which changes based on the scene metrics). &lt;br /&gt;
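That exponential falloff can be sketched directly. The extinction coefficients here are made-up values chosen to produce a greenish tint, not the constants from the original scene.

```python
import math

def transmittance(extinction, distance):
    """Beer-Lambert: fraction of light surviving a path of the given distance."""
    return math.exp(-extinction * distance)

# Hypothetical per-channel coefficients: absorb the red and blue ends more
# strongly than green, which is what gives thick glass its greenish tint.
k_rgb = (0.8, 0.1, 0.6)
tint_short = [transmittance(k, 0.2) for k in k_rgb]  # thin path: nearly clear
tint_long = [transmittance(k, 3.0) for k in k_rgb]   # long path: visibly green
```

The same formula with a single coefficient per wavelength band is all the renderer needs at each refraction exit point.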
&lt;br /&gt;
In the picture below, five spheres are modeled. The back-left sphere is reflective, while the back-right sphere is diffuse. These spheres only provide background for the scene. The spheres in front are used for comparison. The left sphere is a diffuse/reflective sphere with a purplish hue. The middle sphere is a translucent sphere that applies the Beer-Lambert Law based on the distance traveled through the medium, which gives it a slightly greenish tint. The sphere on the right is also translucent but does not apply the Beer-Lambert Law, to show the difference in hue produced by this principle. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p9_beere.png|frame|center|This figure shows several spheres. The front-middle sphere uses the Beer-Lambert Law to absorb certain frequencies of light. The front-right sphere has the same parameters, except it does not implement the Beer-Lambert Law. This image took 5 hours and 24 minutes to render]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project 10 ==&lt;br /&gt;
&lt;br /&gt;
In Project 10, we were to implement participating media. Participating media involves computing physical bounces of light in media such as fog, dust, smoke, etc. where the light bounces around inside the volume instead of just diffusing, reflecting, or refracting. &lt;br /&gt;
&lt;br /&gt;
To implement this, I used the standard marching technique through an axis-aligned bounding box. Each ray was sampled at multiple points along its direction using small steps. When these steps fell inside the bounding volume, they probabilistically hit some of the media. &lt;br /&gt;
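The marching step can also be phrased analytically: instead of many tiny fixed steps, the same exponential attenuation law can be sampled in closed form. A hedged sketch (the names and the density value are illustrative, not from the original code):

```python
import math
import random

def free_flight_distance(density, rng=random.random):
    """Sample how far a ray travels through a uniform medium before scattering.

    The distance follows the density k * e^(-k * d), i.e. the same exponential
    law the fixed-step march approximates. Using 1 - rng() keeps the argument
    of log strictly positive.
    """
    return -math.log(1.0 - rng()) / density

# A ray scatters inside the volume iff the sampled distance is shorter than
# the ray's path length through the bounding box.
d = free_flight_distance(0.5)
```

For a density of 0.5 the mean free-flight distance is 2.0 units, so a box much thinner than that lets most rays pass straight through.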
&lt;br /&gt;
[[File:im_synth_p10_pm.png|frame|center|Participating media is used to simulate a gaseous volume under the cube. Right now my code is very inefficient and took six hours to get these results. Earlier images not requiring participating media required far less time to converge]]&lt;br /&gt;
&lt;br /&gt;
== Project 11 ==&lt;br /&gt;
&lt;br /&gt;
In Project 11, subsurface scattering is added to the renderer. With subsurface scattering, light hitting the surface of a material enters and bounces around inside the medium before exiting. Many materials such as grapes, skin, and marble exhibit this quality. Subsurface scattering in this implementation used ray marching, where the ray enters the medium and bounces around until exiting. Because each ray actually bounced around inside the object instead of just off the surface, the rendering was far more computationally expensive than rendering the scene without subsurface scattering. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p11_scatter.png|frame|center|Two spheres are placed side by side with a large rectangular light source placed overhead. The sphere on the left uses subsurface scattering while the sphere on the right uses just Lambertian reflections. Light shining from above simply bounces off the top of the Lambertian sphere, leaving the bottom dark and unilluminated. The subsurface-scattering sphere, however, has light traveling through the medium and emerging at the bottom of the sphere, as it would in many real materials]]&lt;br /&gt;
&lt;br /&gt;
== Project 12 ==&lt;br /&gt;
&lt;br /&gt;
In Project 12, the Henyey-Greenstein Phase Function was implemented in the renderer. The Henyey-Greenstein Phase Function (HGPF) is an empirical formula that simulates diffuse and specular reflection for a variety of materials using only two parameters. The function uses these two parameters, stored in each object, and takes the incidence angle as input. This provides much greater flexibility for different materials than the simpler Lambertian reflection. &lt;br /&gt;
&lt;br /&gt;
Finding little data on proper values, I rendered an image using a wide range of them. From the image generated, a high g-value appears important for producing a good image. This component supposedly relates directly to the angle at which most of the light leaves. It seems natural that a g-value closer to 1 produces better pictures where light bounces off at a 90-degree angle, while a value of 0 gives very splotchy results. The w-value scales the function and seems to have less of an effect after tone mapping is applied. &lt;br /&gt;
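The phase function itself is compact enough to sketch (usually spelled Henyey-Greenstein; the g and cosine values below are arbitrary illustrations, not the parameters from my scenes):

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function.

    p(theta) = (1 - g^2) / (4 * pi * (1 + g^2 - 2 * g * cos(theta))^(3/2)),
    where theta is the angle between incoming and outgoing directions and
    g in (-1, 1) controls how strongly scattering favors the forward direction.
    """
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

# g = 0 scatters uniformly in every direction (1 / 4*pi everywhere),
# while g near 1 concentrates energy around cos_theta near 1.
uniform = henyey_greenstein(0.0, 0.0)
forward = henyey_greenstein(0.99, 0.9)
```

This matches the behavior in the grid image below: the g-axis changes the shape of the lobe dramatically, while a separate weight merely scales it.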
&lt;br /&gt;
[[File:im_synth_p12_hg.png|frame|center|64 spheres are rendered. All spheres share the same color component, but have differing w and g parameters in the HGPF. From left to right, the g-component ranges from zero to one. From top to bottom the w-component ranges from zero to one]]&lt;br /&gt;
&lt;br /&gt;
== Project 14 ==&lt;br /&gt;
&lt;br /&gt;
In Project 14, for my &amp;quot;cool effect&amp;quot; I implemented shade trees. Shade trees provide a procedural, modular workflow for determining the color at a given point based on various parameters, similar to the ones needed in the Henyey-Greenstein Phase Function. &lt;br /&gt;
&lt;br /&gt;
These shade trees allow simple modules to be built and combined in chains or branches. Some common shade trees provide basic shading effects like Phong shading, Lambertian shading, anisotropic shading, ramp shading, etc. Other modules allow combinations and filters to provide more complex images using these simple modules. &lt;br /&gt;
&lt;br /&gt;
Below are a few different spheres using some shading modules...&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_phong.png|frame|center|A sphere shaded using the Henyey-Greenstein module]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_ramp.png|frame|center|A sphere shaded using a ramp module. This module blends colors between chosen points of interest]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_textured.png|frame|center|A sphere shaded using a texture module, where the theta and phi of the sphere are mapped from [0,1]x[0,1] on the texture]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_striped.png|frame|center|Here a layered filter module has two shaders connected to it. It also uses another texture shader as a mask to determine which of the two shaders to use]]&lt;br /&gt;
&lt;br /&gt;
These shade trees scale to as many levels as the effect requires. The final image at the bottom of this post uses several modules. A Henyey-Greenstein module provides the shiny metallic highlights, and a texture module provides the metal surface. These two modules must be combined in a combo filter separately from the rust so that the rust doesn't appear shiny. The combo filter is then fed into a mask filter with the rust as its other shader, and another texture module is fed into the mask filter to act as the mask. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_metal.png|frame|center|The sphere with just the metal texture]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_hg.png|frame|center|The sphere using the Henyey-Greenstein module]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_rust.png|frame|center|The sphere with just the rust texture]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_mask.png|frame|center|The sphere using the mask texture]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_combo.png|frame|center|The final sphere using the entire shade tree described above. This gives the overall feel of the metal, while splotching rust on parts in a natural-looking splat]]&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_metal.png</id>
		<title>File:Im synth p14 metal.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_metal.png"/>
				<updated>2015-01-18T07:31:30Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_striped.png</id>
		<title>File:Im synth p14 striped.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_striped.png"/>
				<updated>2015-01-18T07:31:14Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_textured.png</id>
		<title>File:Im synth p14 textured.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_textured.png"/>
				<updated>2015-01-18T07:30:57Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_ramp.png</id>
		<title>File:Im synth p14 ramp.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_ramp.png"/>
				<updated>2015-01-18T07:30:43Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_phong.png</id>
		<title>File:Im synth p14 phong.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p14_phong.png"/>
				<updated>2015-01-18T07:30:28Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis</id>
		<title>Image Synthesis</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis"/>
				<updated>2015-01-18T07:30:16Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: /* Project 14 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I took this class with [http://psgraphics.blogspot.com/ Dr. Pete Shirley] in 2006.  He was a very animated professor and loved talking about light and how it behaved.  At one point I remember him running mid-sentence into the kitchen connected to our classroom and coming back with a glass container, which he held up at different angles to talk about refraction.  &lt;br /&gt;
&lt;br /&gt;
All of the assignments were completed via blog postings.  I decided to write my renderer in Java for kicks, while most people chose C++.  Looking back I slightly regret my decision since I had no experience writing &amp;quot;fast&amp;quot; Java code and my renders took noticeably longer than my classmates'.  Still it was one of the funnest college courses I had.  &lt;br /&gt;
&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
This first project samples various frequencies on the Macbeth color checker. If a sample passes a basic sampling check, its color is written to the frame buffer. All the samples are accumulated over each time step to get a better average color. All these images were rendered at 720x480 (they seemed like good numbers) on a Toshiba Tecra S2. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1.png|thumb|center|1 Sample]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
This second project is similar to the first. I sampled XYZ estimates using the tristimulus curves and converted the samples to RGB on the graphics card using the standard Adobe RGB conversion matrix. It took my poor little laptop almost two minutes to render 1024 time steps at 720x480. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
These are samples from a simulated sensor. The grid lies on the xy-plane, and a sphere emits light from its surface in random directions; photons that hit the sensor grid accumulate XYZ factors, which are converted to RGB on the graphics card and displayed to the screen. Below are images from this program taken at different numbers of photon emissions. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000.png|frame|center|1,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000000.png|frame|center|1,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s100000000.png|frame|center|100,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
== Project 4 and 5 ==&lt;br /&gt;
&lt;br /&gt;
In the previous project, a single sphere-shaped light emitter was placed onto the sensor grid. In this example, the light source has been moved back and a pinhole has been added between the sensor grid and all other objects in the scene. The pinhole only allows photons to pass through if their paths travel through the pinhole to reach the grid; all other photons are thrown out. This pinhole-camera scheme produces a much more detailed picture than the previous implementation, as it simulates a perspective frustum just like the eye or a real camera. A red diffuse sphere also bounces light from the light source toward the pinhole. &lt;br /&gt;
&lt;br /&gt;
Included in this project was an importance sampling implementation. This fired photons at the diffuse sphere and directly at the pinhole. The number of photons fired at each object was weighted by its solid angle relative to the total number of photons previously fired in all directions. Because photons were sent only at the pinhole and the object, the scene was rendered using far fewer photons. Although more overhead was required to compute the importance sampling, it greatly reduced the overall collision-calculation time (since there were far fewer photons in the scene). &lt;br /&gt;
&lt;br /&gt;
Both of these projects were added to the previous code base at the same time. Importance sampling was required to make debugging the pinhole code more interactive. The final images look better than those from the previous project and took much less time to synthesize. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p4_is.png|frame|center|313,290,000 emissions (876,964 passed through the pinhole)]]&lt;br /&gt;
&lt;br /&gt;
== Project 6 and 15 ==&lt;br /&gt;
&lt;br /&gt;
In Project 4 &amp;amp; 5, the imager was modified to handle importance sampling and a pinhole camera. The importance sampling provided much faster rendering times by sending photons only at known objects in the scene, and weighting them based on their angle. The photons were generated at the sensor grid and sent through the scene until they hit a light source (or bounced too much). This seemed to provide a good sampling of the scene, but required a major rewrite of the code. &lt;br /&gt;
&lt;br /&gt;
This new version of the software adds a lens to the scene as well as motion blur. To produce motion blur, the red diffuse sphere was moved from (0, 1, 3) to (1, 2, 4) over the course of the rendering. This created a simple but interesting blur effect, as if the object were exposed over many sensor units while the aperture was open. &lt;br /&gt;
&lt;br /&gt;
The lens was a simple biconvex lens attached to the pinhole. Upon contact with the lens, the photon's direction was bent based on its angle of incidence and the refractive index of the lens (1.4 in this case). As the photon left the lens, it was bent again. Snell's law was used to calculate the angle of refraction for each photon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p6_is.png|frame|center|Rendered using 18,000,000,000 photons -- 200,000 per sensor grid (4 hours, 11 seconds rendering time)]]&lt;br /&gt;
&lt;br /&gt;
== Project 7 and 8 ==&lt;br /&gt;
&lt;br /&gt;
In Project 7, we were to use our renderer to model the Cornell box. This box has a physical counterpart that renderings can be compared against to gauge the accuracy of the rendering software. The scene's walls and light are modeled using the geometry and color data from the Cornell box at http://www.graphics.cornell.edu/online/box/data.html. &lt;br /&gt;
&lt;br /&gt;
In addition to the light source and the walls, Project 8 added Fresnel effects to the renderer. Instead of the two blocks found in the original Cornell box, two diffuse spheres have been placed in the far corners of the room. A large transparent sphere is placed in the center of the room, which shows the light refracting through it. &lt;br /&gt;
&lt;br /&gt;
The scene's light seems rather blurry because it was moved down slightly to not create artifacts with the ceiling, which is one giant polygon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p7_cb.png|frame|center|rendered at 400x400 and tone mapped. It took 4 hours to render, which is rather sad]]&lt;br /&gt;
&lt;br /&gt;
== Project 9 ==&lt;br /&gt;
&lt;br /&gt;
In Project 9, we were to add the Beer-Lambert Law to our renderer. The Beer-Lambert Law models the amount of light absorbed while traveling through a medium. Because each wavelength is absorbed at a different rate depending on the distance it must travel through the medium, some colors are absorbed more than others, causing certain wavelengths to become more pronounced. Certain types of glass absorb the high and low wavelengths, leaving a greenish tint at certain angles. This effect is usually most visible where the light passes through the greatest thickness of the glass. To compute the absorption at a given frequency, Euler's number e was raised to the power of the distance times a large negative constant (which changes based on the scene metrics). &lt;br /&gt;
&lt;br /&gt;
In the picture below, five spheres are modeled. The back-left sphere is reflective, while the back-right sphere is diffuse. These spheres only provide background for the scene. The spheres in front are used for comparison. The left sphere is a diffuse/reflective sphere with a purplish hue. The middle sphere is a translucent sphere that applies the Beer-Lambert Law based on the distance traveled through the medium, which gives it a slightly greenish tint. The sphere on the right is also translucent but does not apply the Beer-Lambert Law, to show the difference in hue produced by this principle. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p9_beere.png|frame|center|This figure shows several spheres. The front-middle sphere uses the Beer-Lambert Law to absorb certain frequencies of light. The front-right sphere has the same parameters, except it does not implement the Beer-Lambert Law. This image took 5 hours and 24 minutes to render]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project 10 ==&lt;br /&gt;
&lt;br /&gt;
In Project 10, we were to implement participating media. Participating media involves computing physical bounces of light in media such as fog, dust, smoke, etc. where the light bounces around inside the volume instead of just diffusing, reflecting, or refracting. &lt;br /&gt;
&lt;br /&gt;
To implement this, I used the standard marching technique through an axis-aligned bounding box. Each ray was sampled at multiple points along its direction using small steps. When these steps fell inside the bounding volume, they probabilistically hit some of the media. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p10_pm.png|frame|center|Participating media is used to simulate a gaseous volume under the cube. Right now my code is very inefficient and took six hours to get these results. Earlier images not requiring participating media required far less time to converge]]&lt;br /&gt;
&lt;br /&gt;
== Project 11 ==&lt;br /&gt;
&lt;br /&gt;
In Project 11, subsurface scattering is added to the renderer. With subsurface scattering, light hitting the surface of a material enters and bounces around inside the medium before exiting. Many materials such as grapes, skin, and marble exhibit this quality. Subsurface scattering in this implementation used ray marching, where the ray enters the medium and bounces around until exiting. Because each ray actually bounced around inside the object instead of just off the surface, the rendering was far more computationally expensive than rendering the scene without subsurface scattering. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p11_scatter.png|frame|center|Two spheres are placed side by side with a large rectangular light source placed overhead. The sphere on the left uses subsurface scattering while the sphere on the right uses just Lambertian reflections. Light shining from above simply bounces off the top of the Lambertian sphere, leaving the bottom dark and unilluminated. The subsurface-scattering sphere, however, has light traveling through the medium and emerging at the bottom of the sphere, as it would in many real materials]]&lt;br /&gt;
&lt;br /&gt;
== Project 12 ==&lt;br /&gt;
&lt;br /&gt;
In Project 12, the Henyey-Greenstein Phase Function was implemented in the renderer. The Henyey-Greenstein Phase Function (HGPF) is an empirical formula that simulates diffuse and specular reflection for a variety of materials using only two parameters. The function uses these two parameters, stored in each object, and takes the incidence angle as input. This provides much greater flexibility for different materials than the simpler Lambertian reflection. &lt;br /&gt;
&lt;br /&gt;
Finding little data on proper values, I rendered an image using a wide range of them. From the image generated, a high g-value appears important for producing a good image. This component supposedly relates directly to the angle at which most of the light leaves. It seems natural that a g-value closer to 1 produces better pictures where light bounces off at a 90-degree angle, while a value of 0 gives very splotchy results. The w-value scales the function and seems to have less of an effect after tone mapping is applied. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p12_hg.png|frame|center|64 spheres are rendered. All spheres share the same color component, but have differing w and g parameters in the HGPF. From left to right, the g-component ranges from zero to one. From top to bottom the w-component ranges from zero to one]]&lt;br /&gt;
&lt;br /&gt;
== Project 14 ==&lt;br /&gt;
&lt;br /&gt;
In Project 14, for my &amp;quot;cool effect&amp;quot; I implemented shade trees. Shade trees provide a procedural, modular workflow for determining the color at a given point based on various parameters, similar to the ones needed in the Henyey-Greenstein Phase Function. &lt;br /&gt;
&lt;br /&gt;
These shade trees allow simple modules to be built and combined in chains or branches. Some common shade trees provide basic shading effects like Phong shading, Lambertian shading, anisotropic shading, ramp shading, etc. Other modules allow combinations and filters to provide more complex images using these simple modules. &lt;br /&gt;
&lt;br /&gt;
Below are a few different spheres using some shading modules...&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_phong.png|frame|center|A sphere shaded using the Henyey-Greenstein module]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_ramp.png|frame|center|A sphere shaded using a ramp module. This module blends colors between chosen points of interest]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_textured.png|frame|center|A sphere shaded using a texture module, where the theta and phi of the sphere are mapped from [0,1]x[0,1] on the texture]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_striped.png|frame|center|Here a layered filter module has two shaders connected to it. It also uses another texture shader as a mask to determine which of the two shaders to use]]&lt;br /&gt;
&lt;br /&gt;
These shade trees scale to as many levels as the effect requires. The final image at the bottom of this post uses several modules. A Henyey-Greenstein module provides the shiny metallic highlights, and a texture module provides the metal surface. These two modules must be combined in a combo filter separately from the rust so that the rust doesn't appear shiny. The combo filter is then fed into a mask filter with the rust as its other shader, and another texture module is fed into the mask filter to act as the mask. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_metal.png|frame|center|The sphere with just the metal texture]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_hg.png|frame|center|The sphere using the Henyey-Greenstein module]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_rust.png|frame|center|The sphere with just the rust texture]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_mask.png|frame|center|The sphere using the mask texture]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_combo.png|frame|center|The final sphere using the entire shade tree described above. This gives the overall feel of the metal, while splotching rust on parts in a natural-looking splat]]&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis</id>
		<title>Image Synthesis</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis"/>
				<updated>2015-01-18T07:27:53Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I took this class with [http://psgraphics.blogspot.com/ Dr. Pete Shirley] in 2006.  He was a very animated professor and loved talking about light and how it behaved.  At one point I remember him mid sentence running into the kitchen connected to our classroom and coming back with a glass container he held up looking at it from different angles to talk about refraction.  &lt;br /&gt;
&lt;br /&gt;
All of the assignments were completed via blog postings.  I decided to write my renderer in Java for kicks, while most people chose C++.  Looking back I slightly regret my decision since I had no experience writing &amp;quot;fast&amp;quot; Java code and my renders took noticeably longer than my classmates'.  Still it was one of the funnest college courses I had.  &lt;br /&gt;
&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
This first project samples various frequencies on the Macbeth color checker. If a sample passes a basic sampling check, its color is written to the frame buffer. All the samples are accumulated over each time step to get a better average color. All these images were rendered at 720x480 (they seemed like good numbers) on a Toshiba Tecra S2. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1.png|thumb|center|1 Sample]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
This second project is similar to the first. I sampled XYZ estimates using the tristimulus curves and converted the samples to RGB on the graphics card using the standard Adobe RGB conversion matrix. It took my poor little laptop almost two minutes to render 1024 time steps at 720x480. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
These are samples from a simulated sensor. The grid lies on the xy-plane, and a sphere emitting light from its surface in random directions hits the sensor grid and accumulates XYZ factors, which are converted to RGB on the graphics card and displayed to the screen. Below are images from this program taken at different numbers of photon emissions. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000.png|frame|center|1,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000000.png|frame|center|1,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s100000000.png|frame|center|100,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
== Project 4 and 5 ==&lt;br /&gt;
&lt;br /&gt;
In the previous project, a single sphere-shaped light emitter was placed onto the sensor grid. In this example, the light source has been moved back and a pinhole has been added between the sensor grid and all other objects in the scene. The pinhole only allows photons to pass through if they travel inside the pinhole to reach the grid; other photons are thrown out. This pinhole-camera scheme produces a much more detailed picture than the previous implementation, as it simulates a perspective frustum just like the eye or a real camera. A red diffuse sphere also bounces light from the light source toward the pinhole. &lt;br /&gt;
&lt;br /&gt;
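The pinhole acceptance test above amounts to intersecting each photon's ray with the aperture plane and keeping it only if it crosses inside the hole. A sketch under assumed geometry (aperture plane at z = 0, hole centered at the origin); the radius and all names are illustrative:

```java
// Pinhole acceptance test: a photon at point p traveling along direction d
// is kept only if its path crosses the pinhole disc in the aperture plane.
public class Pinhole {
    static final double RADIUS = 0.01; // hole radius, arbitrary for this sketch

    public static boolean passes(double[] p, double[] d) {
        if (d[2] == 0.0) return false;   // traveling parallel to the aperture plane
        double t = -p[2] / d[2];         // ray parameter where the path hits z = 0
        if (t <= 0) return false;        // the plane is behind the photon
        double x = p[0] + t * d[0];
        double y = p[1] + t * d[1];
        return x * x + y * y <= RADIUS * RADIUS; // inside the hole?
    }
}
```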
Included in this project was an importance sampling implementation. This fired photons at the diffuse sphere and directly at the pinhole. The number of photons fired at each object was a ratio of its solid angle to the total number of photons previously fired in all directions. Because photons were fired only at the pinhole and the object, the scene was rendered using far fewer photons. Although more overhead was required to calculate the importance sampling, it greatly reduced the total collision-calculation time (there being far fewer photons in the scene). &lt;br /&gt;
&lt;br /&gt;
Both of these projects were added to the previous code base at the same time. Importance sampling was required to make debugging the pinhole code more interactive. The final images look better than the previous project's and took much less time to synthesize. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p4_is.png|frame|center|313,290,000 emissions (876,964 passed through the pinhole)]]&lt;br /&gt;
&lt;br /&gt;
== Project 6 and 15 ==&lt;br /&gt;
&lt;br /&gt;
In Projects 4 &amp;amp; 5, the imager was modified to handle importance sampling and a pinhole camera. The importance sampling provided much faster rendering times by sending photons only toward known objects in the scene and weighting them based on their angle. The photons were generated at the sensor grid and sent through the scene until they hit a light source (or bounced too many times). This seemed to provide a good sampling of the scene, but required a major rewrite of the code. &lt;br /&gt;
&lt;br /&gt;
This new version of the software adds a lens to the scene as well as motion blur. To handle motion blur, the red diffuse sphere was moved from (0, 1, 3) to (1, 2, 4) over the course of the rendering. This created an easy but interesting blur effect, as if the object were exposed across many sensor units while the aperture was open. &lt;br /&gt;
&lt;br /&gt;
The lens was a simple biconvex lens attached to the pinhole. Upon contact with the lens, the photon's direction was bent based on its angle of incidence and the refractive index of the lens (1.4 here). As the photon left the lens, it was bent again. Snell's law was used to calculate the angles of refraction for each photon. &lt;br /&gt;
&lt;br /&gt;
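Snell's law refraction, as used for the lens above, has a standard vector form: with eta = n1/n2, the refracted direction is eta*i + (eta*cosI - cosT)*n. A sketch (names and conventions are mine, not from the original code):

```java
// Refract a unit direction i through a surface with unit normal n (pointing
// toward the incoming photon) using Snell's law. eta = n1/n2, e.g. 1.0/1.4
// when entering the lens described in the text.
public class Refract {
    // Returns the refracted direction, or null on total internal reflection.
    public static double[] refract(double[] i, double[] n, double eta) {
        double cosI = -(i[0]*n[0] + i[1]*n[1] + i[2]*n[2]); // cos of incidence angle
        double sinT2 = eta * eta * (1.0 - cosI * cosI);     // sin^2 of refracted angle
        if (sinT2 > 1.0) return null;                       // total internal reflection
        double cosT = Math.sqrt(1.0 - sinT2);
        double[] t = new double[3];
        for (int k = 0; k < 3; k++) {
            t[k] = eta * i[k] + (eta * cosI - cosT) * n[k];
        }
        return t;
    }
}
```

At normal incidence the direction passes through unchanged, which is a handy sanity check.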
[[File:im_synth_p6_is.png|frame|center|Rendered using 18,000,000,000 photons -- 200,000 per sensor grid (4 hours, 11 seconds rendering time)]]&lt;br /&gt;
&lt;br /&gt;
== Project 7 and 8 ==&lt;br /&gt;
&lt;br /&gt;
In Project 7, we were to use our renderers to model the Cornell box, which can be compared against a physical model to gauge the accuracy of the rendering software. The scene's walls and light are modeled using the geometry and color data from the Cornell box at http://www.graphics.cornell.edu/online/box/data.html. &lt;br /&gt;
&lt;br /&gt;
In addition to the light source and the walls, Project 8 added Fresnel effects to the renderer. Instead of the two blocks found in the original Cornell box, two diffuse spheres have been placed in the far corners of the room. A large transparent sphere is placed in the center of the room, showing light refracting through it. &lt;br /&gt;
&lt;br /&gt;
The scene's light seems rather blurry because it was moved down slightly to avoid creating artifacts with the ceiling, which is one giant polygon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p7_cb.png|frame|center|rendered at 400x400 and tone mapped. It took 4 hours to render, which is rather sad]]&lt;br /&gt;
&lt;br /&gt;
== Project 9 ==&lt;br /&gt;
&lt;br /&gt;
In Project 9, we were to add the Beer-Lambert law to our renderers. The Beer-Lambert law models the amount of light absorbed while traveling through a medium. Because each wavelength is absorbed based on the distance it must travel through the medium, some colors can be absorbed more than others, causing certain wavelengths to be more pronounced. Certain types of glass often absorb high and low wavelengths, leaving a greenish tint at certain angles; this effect is usually most visible where the light travels through the greatest thickness of the glass. To compute the absorption at a given frequency, Euler's number e was raised to the power of the distance times a large negative constant (which changes based on the scene metrics). &lt;br /&gt;
&lt;br /&gt;
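The exponential described above is the whole law: the transmitted fraction per wavelength band is e^(-k*d), where d is the distance traveled inside the medium and k an absorption coefficient for that band. A sketch with illustrative names and coefficients:

```java
// Beer-Lambert attenuation: transmitted fraction = e^(-k * d) per channel.
// k holds one absorption coefficient per wavelength band (e.g. R, G, B);
// a smaller k for green than for red/blue yields the greenish glass tint.
public class BeerLambert {
    public static double[] transmit(double[] k, double distance) {
        double[] t = new double[k.length];
        for (int c = 0; c < k.length; c++) {
            t[c] = Math.exp(-k[c] * distance); // fraction surviving the medium
        }
        return t;
    }
}
```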
In the picture below, five spheres are modeled. The back-left sphere is reflective, while the back-right one is diffuse; these spheres only provide background for the scene. The spheres in front are used for comparison. The left sphere is a diffuse/reflective sphere with a purplish hue. The middle sphere is a translucent sphere that applies the Beer-Lambert law based on the distance traveled through the medium, which gives it a slightly greenish tint. The sphere on the right is also translucent but does not apply the Beer-Lambert law, to show the difference in hues produced by this principle. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p9_beere.png|frame|center|This figure shows several spheres. The front-middle sphere uses the Beer-Lambert law to absorb certain frequencies of light. The front-right sphere has the same parameters, except it does not implement the Beer-Lambert law. This image took 5 hours and 24 minutes to render]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project 10 ==&lt;br /&gt;
&lt;br /&gt;
In Project 10, we were to implement participating media. Participating media involves computing physical bounces of light in media such as fog, dust, smoke, etc. where the light bounces around inside the volume instead of just diffusing, reflecting, or refracting. &lt;br /&gt;
&lt;br /&gt;
To implement this, I used the standard marching technique through an axis-aligned bounding box. A ray was sampled multiple times along its direction using small steps. When these steps were inside the bounding volume, they probabilistically hit some of the media. &lt;br /&gt;
&lt;br /&gt;
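The marching loop above can be sketched deterministically: each small step inside the box attenuates the ray by e^(-sigma*step), which is the expected outcome of the probabilistic per-step hit test. All parameters and names are illustrative:

```java
// Ray marching through a homogeneous participating medium: step along the
// portion of the ray inside the bounding box, multiplying transmittance by
// e^(-sigma * step) per step. (Probabilistically, a photon would instead
// scatter in a step with probability 1 - e^(-sigma * step).)
public class RayMarch {
    public static double marchTransmittance(double insideLength, double sigma, double step) {
        int steps = (int) Math.ceil(insideLength / step); // marching steps inside the box
        double t = 1.0;
        for (int i = 0; i < steps; i++) {
            t *= Math.exp(-sigma * step); // fraction surviving this step
        }
        return t;
    }
}
```

Summed over the steps, this converges to the same e^(-sigma*d) attenuation as the Beer-Lambert law, now applied to fog-like media.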
[[File:im_synth_p10_pm.png|frame|center|Participating media is used to simulate a gaseous volume under the cube. Right now my code is very inefficient and took six hours to get these results. Earlier images not requiring participating media needed far less time to converge]]&lt;br /&gt;
&lt;br /&gt;
== Project 11 ==&lt;br /&gt;
&lt;br /&gt;
In Project 11, subsurface scattering is added to the renderer. With subsurface scattering, light hitting the surface of a material enters and bounces around inside the medium before exiting. Many materials such as grapes, skin, and marble exhibit this quality. Subsurface scattering in this implementation used ray marching, where the ray enters the medium and bounces around until exiting. Because the ray actually bounces around inside the object instead of just off the surface, the rendering was far more computationally expensive than rendering this scene without subsurface scattering. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p11_scatter.png|frame|center|Two spheres are placed side by side with a large rectangular light source overhead. The sphere on the left uses subsurface scattering while the sphere on the right uses just Lambertian reflection. Light shining from above simply bounces off the top of the Lambertian sphere, leaving the bottom dark and unilluminated. The subsurface-scattering sphere, however, has light passing through the medium and emerging at the bottom of the sphere, as many real materials would show]]&lt;br /&gt;
&lt;br /&gt;
== Project 12 ==&lt;br /&gt;
&lt;br /&gt;
In Project 12, the Henyey-Greenstein phase function was implemented in the renderer. The Henyey-Greenstein phase function (HGPF) is an empirical formula that simulates diffuse and specular reflection for a variety of materials using only two parameters. The function uses these two parameters, stored in each object, and takes the incidence angle as input. This provides much greater flexibility for different materials than the simpler Lambertian reflection. &lt;br /&gt;
&lt;br /&gt;
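The standard Henyey-Greenstein formula is p(cosTheta) = (1/4&amp;pi;) * (1 - g&amp;sup2;) / (1 + g&amp;sup2; - 2g&amp;middot;cosTheta)^(3/2); the w parameter described below is a simple scale on top of it. A sketch of the evaluation (the mapping of w and g onto the renderer's objects is the text's, not shown here):

```java
// Henyey-Greenstein phase function:
// p(cosTheta) = (1/4pi) * (1 - g^2) / (1 + g^2 - 2*g*cosTheta)^(3/2)
// g in (-1, 1) controls anisotropy: g = 0 is isotropic, g near 1 is
// strongly forward-peaked.
public class HenyeyGreenstein {
    public static double phase(double g, double cosTheta) {
        double denom = 1.0 + g * g - 2.0 * g * cosTheta;
        return (1.0 - g * g) / (4.0 * Math.PI * Math.pow(denom, 1.5));
    }
}
```

For g = 0 the function is constant at 1/(4&amp;pi;); for large g nearly all of the value concentrates around cosTheta = 1, matching the splotchy-versus-smooth behavior described below.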
Finding little data on proper values, I rendered an image using a wide range of them. From the image generated, a high g-value appears important for producing a good image. This component supposedly relates directly to the angle at which most of the light leaves. It seems natural that a g-value closer to 1 produces better pictures where light bounces off at a 90-degree angle, while a value of 0 gives very splotchy results. The w-value scales the function and seems to have less of an effect after tone mapping is applied. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p12_hg.png|frame|center|64 spheres are rendered. All spheres share the same color component, but have differing w and g parameters in the HGPF. From left to right, the g-component ranges from zero to one. From top to bottom the w-component ranges from zero to one]]&lt;br /&gt;
&lt;br /&gt;
== Project 14 ==&lt;br /&gt;
&lt;br /&gt;
In Project 14, for my &amp;quot;cool effect&amp;quot; I implemented shade trees. Shade trees provide a procedural, modular workflow for determining the color at a given point based on various parameters, similar to those needed in the Henyey-Greenstein phase function. &lt;br /&gt;
&lt;br /&gt;
These shade trees allow simple modules to be built and combined in chains or branches. Some common shade trees provide basic shading effects like Phong shading, Lambertian shading, anisotropic shading, ramp shading, etc. Other modules allow combinations and filters to provide more complex images using these simple modules. &lt;br /&gt;
&lt;br /&gt;
Below are a few different spheres using some shading modules...&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_phong.png|frame|center|A sphere shaded using the Henyey-Greenstein module]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_ramp.png|frame|center|A sphere shaded using a ramp module. This module colors the surface based on certain points of interest]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_textured.png|frame|center|A sphere shaded using a texture module, where the theta and phi of the sphere are mapped from [0,1]x[0,1] on the texture]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_striped.png|frame|center|Here, a layered filter module has two shaders connected to it. It also uses another texture shader as a mask to determine which of the two shaders to use]]&lt;br /&gt;
&lt;br /&gt;
These shade trees scale to as many levels as the effect requires. The final image at the bottom of this post uses several modules. A Henyey-Greenstein module provides a shiny metallic surface, and a texture module provides the metal's appearance. These modules must be combined in a combo filter separately from the rust so that the rust doesn't appear shiny. That combo filter is fed into a mask filter with the rust as its other shader, and a texture module is fed into the mask filter to serve as the mask. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p14_metal.png|frame|center|The sphere with just the metal texture]]&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p12_hg.png</id>
		<title>File:Im synth p12 hg.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p12_hg.png"/>
				<updated>2015-01-18T07:23:57Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis</id>
		<title>Image Synthesis</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis"/>
				<updated>2015-01-18T07:23:32Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I took this class with [http://psgraphics.blogspot.com/ Dr. Pete Shirley] in 2006.  He was a very animated professor and loved talking about light and how it behaved.  At one point I remember him, mid-sentence, running into the kitchen connected to our classroom and coming back with a glass container, which he held up and looked at from different angles while talking about refraction.  &lt;br /&gt;
&lt;br /&gt;
All of the assignments were completed via blog postings.  I decided to write my renderer in Java for kicks, while most people chose C++.  Looking back I slightly regret that decision, since I had no experience writing &amp;quot;fast&amp;quot; Java code and my renders took noticeably longer than my classmates'.  Still, it was one of the most fun college courses I took.  &lt;br /&gt;
&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
This first project samples various frequencies on the Macbeth color checker. If a sample passes a basic sampling check, its color is added to the frame buffer. The samples are accumulated over each time step to get a better average color. All of these images were rendered at 720x480 (they seemed like good numbers) on a Toshiba Tecra S2. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1.png|thumb|center|1 Sample]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
This second project is similar to the first. I sampled XYZ estimates using the tristimulus curves and converted the samples to RGB on the graphics card using the standard Adobe RGB conversion matrix. It took my poor little laptop almost two minutes to render 1024 time steps at 720x480. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
These are samples from a simulated sensor. The grid lies on the xy-plane, and a sphere emitting light from its surface in random directions hits the sensor grid and accumulates XYZ factors, which are converted to RGB on the graphics card and displayed to the screen. Below are images from this program taken at different numbers of photon emissions. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000.png|frame|center|1,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000000.png|frame|center|1,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s100000000.png|frame|center|100,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
== Project 4 and 5 ==&lt;br /&gt;
&lt;br /&gt;
In the previous project, a single sphere-shaped light emitter was placed onto the sensor grid. In this example, the light source has been moved back and a pinhole has been added between the sensor grid and all other objects in the scene. The pinhole only allows photons to pass through if they travel inside the pinhole to reach the grid; other photons are thrown out. This pinhole-camera scheme produces a much more detailed picture than the previous implementation, as it simulates a perspective frustum just like the eye or a real camera. A red diffuse sphere also bounces light from the light source toward the pinhole. &lt;br /&gt;
&lt;br /&gt;
Included in this project was an importance sampling implementation. This fired photons at the diffuse sphere and directly at the pinhole. The number of photons fired at each object was a ratio of its solid angle to the total number of photons previously fired in all directions. Because photons were fired only at the pinhole and the object, the scene was rendered using far fewer photons. Although more overhead was required to calculate the importance sampling, it greatly reduced the total collision-calculation time (there being far fewer photons in the scene). &lt;br /&gt;
&lt;br /&gt;
Both of these projects were added to the previous code base at the same time. Importance sampling was required to make debugging the pinhole code more interactive. The final images look better than the previous project's and took much less time to synthesize. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p4_is.png|frame|center|313,290,000 emissions (876,964 passed through the pinhole)]]&lt;br /&gt;
&lt;br /&gt;
== Project 6 and 15 ==&lt;br /&gt;
&lt;br /&gt;
In Projects 4 &amp;amp; 5, the imager was modified to handle importance sampling and a pinhole camera. The importance sampling provided much faster rendering times by sending photons only toward known objects in the scene and weighting them based on their angle. The photons were generated at the sensor grid and sent through the scene until they hit a light source (or bounced too many times). This seemed to provide a good sampling of the scene, but required a major rewrite of the code. &lt;br /&gt;
&lt;br /&gt;
This new version of the software adds a lens to the scene as well as motion blur. To handle motion blur, the red diffuse sphere was moved from (0, 1, 3) to (1, 2, 4) over the course of the rendering. This created an easy but interesting blur effect, as if the object were exposed across many sensor units while the aperture was open. &lt;br /&gt;
&lt;br /&gt;
The lens was a simple biconvex lens attached to the pinhole. Upon contact with the lens, the photon's direction was bent based on its angle of incidence and the refractive index of the lens (1.4 here). As the photon left the lens, it was bent again. Snell's law was used to calculate the angles of refraction for each photon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p6_is.png|frame|center|Rendered using 18,000,000,000 photons -- 200,000 per sensor grid (4 hours, 11 seconds rendering time)]]&lt;br /&gt;
&lt;br /&gt;
== Project 7 and 8 ==&lt;br /&gt;
&lt;br /&gt;
In Project 7, we were to use our renderers to model the Cornell box, which can be compared against a physical model to gauge the accuracy of the rendering software. The scene's walls and light are modeled using the geometry and color data from the Cornell box at http://www.graphics.cornell.edu/online/box/data.html. &lt;br /&gt;
&lt;br /&gt;
In addition to the light source and the walls, Project 8 added Fresnel effects to the renderer. Instead of the two blocks found in the original Cornell box, two diffuse spheres have been placed in the far corners of the room. A large transparent sphere is placed in the center of the room, showing light refracting through it. &lt;br /&gt;
&lt;br /&gt;
The scene's light seems rather blurry because it was moved down slightly to avoid creating artifacts with the ceiling, which is one giant polygon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p7_cb.png|frame|center|rendered at 400x400 and tone mapped. It took 4 hours to render, which is rather sad]]&lt;br /&gt;
&lt;br /&gt;
== Project 9 ==&lt;br /&gt;
&lt;br /&gt;
In Project 9, we were to add the Beer-Lambert law to our renderers. The Beer-Lambert law models the amount of light absorbed while traveling through a medium. Because each wavelength is absorbed based on the distance it must travel through the medium, some colors can be absorbed more than others, causing certain wavelengths to be more pronounced. Certain types of glass often absorb high and low wavelengths, leaving a greenish tint at certain angles; this effect is usually most visible where the light travels through the greatest thickness of the glass. To compute the absorption at a given frequency, Euler's number e was raised to the power of the distance times a large negative constant (which changes based on the scene metrics). &lt;br /&gt;
&lt;br /&gt;
In the picture below, five spheres are modeled. The back-left sphere is reflective, while the back-right one is diffuse; these spheres only provide background for the scene. The spheres in front are used for comparison. The left sphere is a diffuse/reflective sphere with a purplish hue. The middle sphere is a translucent sphere that applies the Beer-Lambert law based on the distance traveled through the medium, which gives it a slightly greenish tint. The sphere on the right is also translucent but does not apply the Beer-Lambert law, to show the difference in hues produced by this principle. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p9_beere.png|frame|center|This figure shows several spheres. The front-middle sphere uses the Beer-Lambert law to absorb certain frequencies of light. The front-right sphere has the same parameters, except it does not implement the Beer-Lambert law. This image took 5 hours and 24 minutes to render]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project 10 ==&lt;br /&gt;
&lt;br /&gt;
In Project 10, we were to implement participating media. Participating media involves computing physical bounces of light in media such as fog, dust, smoke, etc. where the light bounces around inside the volume instead of just diffusing, reflecting, or refracting. &lt;br /&gt;
&lt;br /&gt;
To implement this, I used the standard marching technique through an axis-aligned bounding box. A ray was sampled multiple times along its direction using small steps. When these steps were inside the bounding volume, they probabilistically hit some of the media. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p10_pm.png|frame|center|Participating media is used to simulate a gaseous volume under the cube. Right now my code is very inefficient and took six hours to get these results. Earlier images not requiring participating media needed far less time to converge]]&lt;br /&gt;
&lt;br /&gt;
== Project 11 ==&lt;br /&gt;
&lt;br /&gt;
In Project 11, subsurface scattering is added to the renderer. With subsurface scattering, light hitting the surface of a material enters and bounces around inside the medium before exiting. Many materials such as grapes, skin, and marble exhibit this quality. Subsurface scattering in this implementation used ray marching, where the ray enters the medium and bounces around until exiting. Because the ray actually bounces around inside the object instead of just off the surface, the rendering was far more computationally expensive than rendering this scene without subsurface scattering. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p11_scatter.png|frame|center|Two spheres are placed side by side with a large rectangular light source overhead. The sphere on the left uses subsurface scattering while the sphere on the right uses just Lambertian reflection. Light shining from above simply bounces off the top of the Lambertian sphere, leaving the bottom dark and unilluminated. The subsurface-scattering sphere, however, has light passing through the medium and emerging at the bottom of the sphere, as many real materials would show]]&lt;br /&gt;
&lt;br /&gt;
== Project 12 ==&lt;br /&gt;
&lt;br /&gt;
In Project 12, the Henyey-Greenstein phase function was implemented in the renderer. The Henyey-Greenstein phase function (HGPF) is an empirical formula that simulates diffuse and specular reflection for a variety of materials using only two parameters. The function uses these two parameters, stored in each object, and takes the incidence angle as input. This provides much greater flexibility for different materials than the simpler Lambertian reflection. &lt;br /&gt;
&lt;br /&gt;
Finding little data on proper values, I rendered an image using a wide range of them. From the image generated, a high g-value appears important for producing a good image. This component supposedly relates directly to the angle at which most of the light leaves. It seems natural that a g-value closer to 1 produces better pictures where light bounces off at a 90-degree angle, while a value of 0 gives very splotchy results. The w-value scales the function and seems to have less of an effect after tone mapping is applied. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p12_hg.png|frame|center|64 spheres are rendered. All spheres share the same color component, but have differing w and g parameters in the HGPF. From left to right, the g-component ranges from zero to one. From top to bottom the w-component ranges from zero to one]]&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p11_scatter.png</id>
		<title>File:Im synth p11 scatter.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p11_scatter.png"/>
				<updated>2015-01-18T07:22:15Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis</id>
		<title>Image Synthesis</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis"/>
				<updated>2015-01-18T07:22:07Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: /* Project 11 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I took this class with [http://psgraphics.blogspot.com/ Dr. Pete Shirley] in 2006.  He was a very animated professor and loved talking about light and how it behaved.  At one point I remember him, mid-sentence, running into the kitchen connected to our classroom and coming back with a glass container, which he held up and looked at from different angles while talking about refraction.  &lt;br /&gt;
&lt;br /&gt;
All of the assignments were completed via blog postings.  I decided to write my renderer in Java for kicks, while most people chose C++.  Looking back I slightly regret that decision, since I had no experience writing &amp;quot;fast&amp;quot; Java code and my renders took noticeably longer than my classmates'.  Still, it was one of the most fun college courses I took.  &lt;br /&gt;
&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
This first project samples various frequencies on the Macbeth color checker. If a sample passes a basic sampling check, its color is added to the frame buffer. The samples are accumulated over each time step to get a better average color. All of these images were rendered at 720x480 (they seemed like good numbers) on a Toshiba Tecra S2. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1.png|thumb|center|1 Sample]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
This second project is similar to the first. I sampled XYZ estimates using the tristimulus curves and converted the samples to RGB on the graphics card using the standard Adobe RGB conversion matrix. It took my poor little laptop almost two minutes to render 1024 time steps at 720x480. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
These are samples from a simulated sensor. The grid lies on the xy-plane, and a sphere emitting light from its surface in random directions hits the sensor grid and accumulates XYZ factors, which are converted to RGB on the graphics card and displayed to the screen. Below are images from this program taken at different numbers of photon emissions. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000.png|frame|center|1,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000000.png|frame|center|1,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s100000000.png|frame|center|100,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
== Project 4 and 5 ==&lt;br /&gt;
&lt;br /&gt;
In the previous project, a single sphere-shaped light emitter was placed onto the sensor grid. In this example, the light source has been moved back and a pinhole has been added between the sensor grid and all other objects in the scene. The pinhole only allows photons to pass through if they travel inside the pinhole to reach the grid; other photons are thrown out. This pinhole-camera scheme produces a much more detailed picture than the previous implementation, as it simulates a perspective frustum just like the eye or a real camera. A red diffuse sphere also bounces light from the light source toward the pinhole. &lt;br /&gt;
&lt;br /&gt;
Included in this project was an importance sampling implementation. This fired photons at the diffuse sphere and directly at the pinhole. The number of photons fired at each object was a ratio of its solid angle to the total number of photons previously fired in all directions. Because photons were fired only at the pinhole and the object, the scene was rendered using far fewer photons. Although more overhead was required to calculate the importance sampling, it greatly reduced the total collision-calculation time (there being far fewer photons in the scene). &lt;br /&gt;
&lt;br /&gt;
Both of these projects were added to the previous code base at the same time. Importance sampling was required to make debugging the pinhole code more interactive. The final images look better than the previous project's and took much less time to synthesize. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p4_is.png|frame|center|313,290,000 emissions (876,964 passed through the pinhole)]]&lt;br /&gt;
&lt;br /&gt;
== Project 6 and 15 ==&lt;br /&gt;
&lt;br /&gt;
In Projects 4 &amp;amp; 5, the imager was modified to handle importance sampling and a pinhole camera. The importance sampling provided much faster rendering times by sending photons only toward known objects in the scene and weighting them based on their angle. The photons were generated at the sensor grid and sent through the scene until they hit a light source (or bounced too many times). This seemed to provide a good sampling of the scene, but required a major rewrite of the code. &lt;br /&gt;
&lt;br /&gt;
This new version of the software adds a lens to the scene as well as motion blur. To handle motion blur, the red diffuse sphere was moved from (0, 1, 3) to (1, 2, 4) over the course of the rendering. This created an easy but interesting blur effect, as if the object were exposed across many sensor units while the aperture was open. &lt;br /&gt;
&lt;br /&gt;
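The motion blur above amounts to giving each photon its own shutter time and interpolating the sphere's center along its path. A minimal sketch, where only the two endpoints come from the text and all names are illustrative:

```java
// Sketch: motion blur by sampling a shutter time per photon and moving the sphere.
// The linear path from (0,1,3) to (1,2,4) matches the text; the names are mine.
public class MotionBlur {
    // Linear interpolation of the sphere center at shutter time t in [0, 1].
    static double[] centerAt(double t) {
        double[] start = {0.0, 1.0, 3.0};
        double[] end   = {1.0, 2.0, 4.0};
        double[] c = new double[3];
        for (int i = 0; i != 3; i++) {
            c[i] = start[i] + t * (end[i] - start[i]);
        }
        return c;
    }

    public static void main(String[] args) {
        // Each photon gets its own random shutter time, smearing the sphere.
        double t = Math.random();
        double[] c = centerAt(t);
        System.out.println(c[0] + " " + c[1] + " " + c[2]);
    }
}
```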
The lens was a simple biconvex lens attached to the pinhole. Upon contact with the lens, a photon's direction was bent based on its angle of incidence and the refractive index of the lens (1.4 here). As the photon left the lens, it was bent again. Snell's law was used to calculate the angle of refraction for each photon. &lt;br /&gt;
&lt;br /&gt;
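The per-photon bending can be sketched with the vector form of Snell's law. This is the generic textbook formulation, not the original renderer's code, though the 1.4 refractive index matches the text:

```java
// Sketch: bending a photon direction with the vector form of Snell's law.
public class Refract {
    // Refract unit direction d through a surface with unit normal n,
    // going from index n1 into index n2. Returns null on total internal reflection.
    static double[] refract(double[] d, double[] n, double n1, double n2) {
        double eta = n1 / n2;
        double cosI = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2]);
        double sinT2 = eta * eta * (1.0 - cosI * cosI);
        if (sinT2 > 1.0) return null; // total internal reflection
        double cosT = Math.sqrt(1.0 - sinT2);
        double[] t = new double[3];
        for (int i = 0; i != 3; i++) {
            t[i] = eta * d[i] + (eta * cosI - cosT) * n[i];
        }
        return t;
    }

    public static void main(String[] args) {
        // A photon hitting the surface head-on should continue straight through.
        double[] t = refract(new double[]{0.0, 0.0, -1.0},
                             new double[]{0.0, 0.0, 1.0}, 1.0, 1.4);
        System.out.println(t[0] + " " + t[1] + " " + t[2]);
    }
}
```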
[[File:im_synth_p6_is.png|frame|center|Rendered using 18,000,000,000 photons -- 200,000 per sensor grid (4 hours, 11 seconds rendering time)]]&lt;br /&gt;
&lt;br /&gt;
== Project 7 and 8 ==&lt;br /&gt;
&lt;br /&gt;
In Project 7, we were to use our renderer to model the Cornell box, which could be checked against the physical model to gauge the accuracy of the rendering software. The scene's walls and light are modeled using the geometry and color data from the Cornell box at http://www.graphics.cornell.edu/online/box/data.html. &lt;br /&gt;
&lt;br /&gt;
In addition to the light source and the walls, Project 8 added Fresnel effects to the renderer. Instead of the two blocks found in the original Cornell box, two diffuse spheres have been placed in the far corners of the room. A large transparent sphere is placed in the center of the room, showing the light refracting through it. &lt;br /&gt;
&lt;br /&gt;
The scene's light seems rather blurry because it was moved down slightly to avoid creating artifacts with the ceiling, which is one giant polygon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p7_cb.png|frame|center|rendered at 400x400 and tone mapped. It took 4 hours to render, which is rather sad]]&lt;br /&gt;
&lt;br /&gt;
== Project 9 ==&lt;br /&gt;
&lt;br /&gt;
In Project 9, we were to add the Beer-Lambert law to our renderer. The Beer-Lambert law models the amount of light absorbed while traveling through a medium. Because each wavelength is absorbed according to the distance it must travel through the medium, some wavelengths are absorbed more than others, leaving the remaining wavelengths more pronounced. Certain types of glass absorb high and low wavelengths, leaving a greenish tint at certain angles; this effect is usually most visible where the light passes through the greatest thickness of the glass. To compute the absorption at a given frequency, Euler's number e was raised to the power of the distance times a large negative constant (which changes based on the scene metrics). &lt;br /&gt;
&lt;br /&gt;
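The exponential falloff described above can be sketched per color channel. The absorption constants below are made-up illustrative values, chosen so red and blue die off faster than green, matching the greenish tint:

```java
// Sketch: Beer-Lambert attenuation per color channel.
// Transmittance per channel is e^(-k * distance); the constants k are illustrative.
public class BeerLambert {
    static double[] transmit(double[] rgb, double[] k, double distance) {
        double[] out = new double[3];
        for (int i = 0; i != 3; i++) {
            out[i] = rgb[i] * Math.exp(-k[i] * distance);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] white = {1.0, 1.0, 1.0};
        double[] k = {0.6, 0.1, 0.5}; // absorb red and blue faster than green
        double[] thick = transmit(white, k, 3.0);
        // Green survives the long path best, giving the glassy green tint.
        System.out.println(thick[0] + " " + thick[1] + " " + thick[2]);
    }
}
```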
In the picture below, five spheres are modeled. The back-left sphere is reflective, while the back-right sphere is diffuse; these spheres only provide background for the scene. The spheres in front are used for comparison. The left sphere is a diffuse/reflective sphere with a purplish hue. The middle sphere is a translucent sphere that applies the Beer-Lambert law based on the distance traveled through the medium, giving it a slightly greenish tint. The sphere on the right is also translucent, but does not apply the Beer-Lambert law. This shows the difference in hues produced by the principle. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p9_beere.png|frame|center|This figure shows several spheres. The front-middle sphere uses the Beer-Lambert law to absorb certain frequencies of light. The front-right sphere has the same parameters, except it does not apply the Beer-Lambert law. This image took 5 hours and 24 minutes to render]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project 10 ==&lt;br /&gt;
&lt;br /&gt;
In Project 10, we were to implement participating media. Participating media involves computing the physical bounces of light in media such as fog, dust, and smoke, where the light bounces around inside the volume instead of just diffusing, reflecting, or refracting at a surface. &lt;br /&gt;
&lt;br /&gt;
To implement this, I used the standard ray-marching technique through an axis-aligned bounding box. A ray was sampled at small steps along its direction, and when those steps fell inside the bounding volume, they probabilistically hit some of the media. &lt;br /&gt;
&lt;br /&gt;
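The marching step can be sketched as follows; the scattering coefficient, the step length, and all names here are illustrative assumptions, not the original code:

```java
// Sketch: probabilistic scattering while marching a ray in small steps.
// At each step inside the volume the photon scatters with probability
// 1 - e^(-sigma * stepLength); sigma and the names are illustrative.
public class FogMarch {
    // Probability of a scattering event over one march step.
    static double scatterProbability(double sigma, double stepLength) {
        return 1.0 - Math.exp(-sigma * stepLength);
    }

    // Fraction of light surviving a straight path of the given length in the medium.
    static double transmittance(double sigma, double pathLength) {
        return Math.exp(-sigma * pathLength);
    }

    // March along the ray, rolling a die at each step inside the medium.
    static int stepsUntilScatter(double sigma, double stepLength, java.util.Random rng) {
        double p = scatterProbability(sigma, stepLength);
        int steps = 0;
        while (rng.nextDouble() >= p) {
            steps++;
        }
        return steps + 1;
    }

    public static void main(String[] args) {
        System.out.println(transmittance(0.8, 2.0));
        System.out.println(stepsUntilScatter(0.8, 0.05, new java.util.Random(42)));
    }
}
```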
[[File:im_synth_p10_pm.png|frame|center|Participating media is used to simulate a gaseous volume under the cube. Right now my code is very inefficient and took six hours to get these results. Earlier images not requiring participating media took far less time to converge]]&lt;br /&gt;
&lt;br /&gt;
== Project 11 ==&lt;br /&gt;
&lt;br /&gt;
In Project 11, subsurface scattering is added to the renderer. Here, light hitting the surface of a material enters and bounces around inside the medium before exiting. Many materials, such as grapes, skin, and marble, exhibit this quality. Subsurface scattering in this implementation used ray marching, where the ray enters the medium and bounces around until exiting. Because the ray actually bounced around inside the object instead of just off the surface, the rendering was far more computationally expensive than rendering the scene without subsurface scattering. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p11_scatter.png|frame|center|Two spheres are placed side by side with a large rectangular light source overhead. The sphere on the left uses subsurface scattering while the sphere on the right uses only Lambertian reflection. Light shining from above simply bounces off the top of the Lambertian sphere, leaving the bottom dark and unilluminated. The subsurface-scattering sphere, however, has light passing through the medium and emerging at the bottom of the sphere, as many real materials would]]&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis</id>
		<title>Image Synthesis</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis"/>
				<updated>2015-01-18T07:21:49Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I took this class with [http://psgraphics.blogspot.com/ Dr. Pete Shirley] in 2006.  He was a very animated professor and loved talking about light and how it behaved.  At one point I remember him running mid-sentence into the kitchen connected to our classroom and coming back with a glass container, which he held up and looked at from different angles to talk about refraction.  &lt;br /&gt;
&lt;br /&gt;
All of the assignments were completed via blog postings.  I decided to write my renderer in Java for kicks, while most people chose C++.  Looking back, I slightly regret my decision, since I had no experience writing &amp;quot;fast&amp;quot; Java code and my renders took noticeably longer than my classmates'.  Still, it was one of the funnest college courses I had.  &lt;br /&gt;
&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
This first project samples various frequencies on the Macbeth color checker. If a sample passes a basic sampling check, its color is written into the frame buffer. All the samples are accumulated over the time steps to get a better average color. All these images were rendered at 720x480 (they seemed like good numbers) on a Toshiba Tecra S2. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1.png|thumb|center|1 Sample]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
This second project is similar to the first. I sampled XYZ estimates using the tristimulus curves and converted the samples to RGB on the graphics card using the standard Adobe RGB conversion matrix. It took my poor little laptop almost two minutes to render 1024 time steps at 720x480. &lt;br /&gt;
&lt;br /&gt;
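The XYZ-to-RGB step is a 3x3 matrix multiply. The coefficients below are the commonly published XYZ-to-Adobe-RGB (1998) matrix for a D65 white point; treat them as a sketch and double-check against a color reference before relying on them:

```java
// Sketch: converting an accumulated XYZ sample to linear RGB with a 3x3 matrix.
// Coefficients: commonly published XYZ-to-Adobe-RGB (1998), D65 — verify before use.
public class XyzToRgb {
    static final double[][] M = {
        { 2.0413690, -0.5649464, -0.3446944},
        {-0.9692660,  1.8760108,  0.0415560},
        { 0.0134474, -0.1183897,  1.0154096},
    };

    static double[] toRgb(double[] xyz) {
        double[] rgb = new double[3];
        for (int i = 0; i != 3; i++) {
            for (int j = 0; j != 3; j++) {
                rgb[i] += M[i][j] * xyz[j];
            }
        }
        return rgb;
    }

    public static void main(String[] args) {
        // The D65 white point should map close to (1, 1, 1).
        double[] rgb = toRgb(new double[]{0.95047, 1.0, 1.08883});
        System.out.println(rgb[0] + " " + rgb[1] + " " + rgb[2]);
    }
}
```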
[[File:im_synth_p2_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
These are samples from a simulated sensor. The grid lies on the xy-plane, and a sphere emitting light from its surface in random directions hits the sensor grid and accumulates XYZ values, which are converted to RGB on the graphics card and displayed on the screen. Below are images from this program at different numbers of photon emissions. &lt;br /&gt;
&lt;br /&gt;
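Emitting in random directions amounts to drawing uniform unit vectors. Rejection sampling is one standard way to do it; it is my choice for this sketch, not necessarily what the original renderer did:

```java
// Sketch: uniform random unit vectors via rejection sampling in the unit cube.
public class RandomDirection {
    static double[] sample(java.util.Random rng) {
        while (true) {
            double x = 2.0 * rng.nextDouble() - 1.0;
            double y = 2.0 * rng.nextDouble() - 1.0;
            double z = 2.0 * rng.nextDouble() - 1.0;
            double len2 = x * x + y * y + z * z;
            if (len2 > 1.0) continue; // outside the unit ball, reject
            if (len2 > 1e-9) {        // avoid dividing by a near-zero length
                double len = Math.sqrt(len2);
                return new double[]{x / len, y / len, z / len};
            }
        }
    }

    public static void main(String[] args) {
        double[] d = sample(new java.util.Random(7));
        System.out.println(d[0] + " " + d[1] + " " + d[2]);
    }
}
```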
[[File:im_synth_p3_s1000.png|frame|center|1,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000000.png|frame|center|1,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s100000000.png|frame|center|100,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
== Project 4 and 5 ==&lt;br /&gt;
&lt;br /&gt;
In the previous project, a single sphere-shaped light emitter was placed onto the sensor grid. In this example, the light source has been moved back and a pinhole has been added between the sensor grid and all other objects in the scene. The pinhole only lets photons through if their paths pass through it to reach the grid; all other photons are discarded. This pinhole-camera scheme produces a much more detailed picture than the previous implementation, as it simulates a perspective frustum just like the eye or a real camera. A red diffuse sphere also bounces light from the light source to the pinhole. &lt;br /&gt;
&lt;br /&gt;
Included in this project was an importance sampling implementation, which fired photons only at the diffuse sphere and directly at the pinhole. The number of photons fired at each object was its share of the total photon budget, proportional to the solid angle it subtends relative to all directions. Because photons were sent only toward the pinhole and the sphere, the scene was rendered using far fewer photons. Although importance sampling required some extra overhead, it greatly reduced the per-collision calculation time, since there were far fewer photons in the scene. &lt;br /&gt;
&lt;br /&gt;
Both of these features were added to the previous code base at the same time. Importance sampling was needed to make debugging the pinhole code more interactive. The final images look better than those from the previous project and took much less time to synthesize. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p4_is.png|frame|center|313,290,000 emissions (876,964 passed through the pinhole)]]&lt;br /&gt;
&lt;br /&gt;
== Project 6 and 15 ==&lt;br /&gt;
&lt;br /&gt;
In Projects 4 &amp;amp; 5, the imager was modified to handle importance sampling and a pinhole camera. Importance sampling provided much faster rendering times by sending photons only at known objects in the scene and weighting them based on their solid angle. The photons were generated at the sensor grid and sent through the scene until they hit a light source (or bounced too many times). This seemed to provide a good sampling of the scene, but required a major rewrite of the code. &lt;br /&gt;
&lt;br /&gt;
This new version of the software adds a lens to the scene as well as motion blur. To handle motion blur, the red diffuse sphere was moved from (0, 1, 3) to (1, 2, 4) over the course of the rendering. This created a simple but interesting blur effect, as if the object had been exposed over many sensor units while the aperture was open. &lt;br /&gt;
&lt;br /&gt;
The lens was a simple biconvex lens attached to the pinhole. Upon contact with the lens, a photon's direction was bent based on its angle of incidence and the refractive index of the lens (1.4 here). As the photon left the lens, it was bent again. Snell's law was used to calculate the angle of refraction for each photon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p6_is.png|frame|center|Rendered using 18,000,000,000 photons -- 200,000 per sensor grid (4 hours, 11 seconds rendering time)]]&lt;br /&gt;
&lt;br /&gt;
== Project 7 and 8 ==&lt;br /&gt;
&lt;br /&gt;
In Project 7, we were to use our renderer to model the Cornell box, which could be checked against the physical model to gauge the accuracy of the rendering software. The scene's walls and light are modeled using the geometry and color data from the Cornell box at http://www.graphics.cornell.edu/online/box/data.html. &lt;br /&gt;
&lt;br /&gt;
In addition to the light source and the walls, Project 8 added Fresnel effects to the renderer. Instead of the two blocks found in the original Cornell box, two diffuse spheres have been placed in the far corners of the room. A large transparent sphere is placed in the center of the room, showing the light refracting through it. &lt;br /&gt;
&lt;br /&gt;
The scene's light seems rather blurry because it was moved down slightly to avoid creating artifacts with the ceiling, which is one giant polygon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p7_cb.png|frame|center|rendered at 400x400 and tone mapped. It took 4 hours to render, which is rather sad]]&lt;br /&gt;
&lt;br /&gt;
== Project 9 ==&lt;br /&gt;
&lt;br /&gt;
In Project 9, we were to add the Beer-Lambert law to our renderer. The Beer-Lambert law models the amount of light absorbed while traveling through a medium. Because each wavelength is absorbed according to the distance it must travel through the medium, some wavelengths are absorbed more than others, leaving the remaining wavelengths more pronounced. Certain types of glass absorb high and low wavelengths, leaving a greenish tint at certain angles; this effect is usually most visible where the light passes through the greatest thickness of the glass. To compute the absorption at a given frequency, Euler's number e was raised to the power of the distance times a large negative constant (which changes based on the scene metrics). &lt;br /&gt;
&lt;br /&gt;
In the picture below, five spheres are modeled. The back-left sphere is reflective, while the back-right sphere is diffuse; these spheres only provide background for the scene. The spheres in front are used for comparison. The left sphere is a diffuse/reflective sphere with a purplish hue. The middle sphere is a translucent sphere that applies the Beer-Lambert law based on the distance traveled through the medium, giving it a slightly greenish tint. The sphere on the right is also translucent, but does not apply the Beer-Lambert law. This shows the difference in hues produced by the principle. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p9_beere.png|frame|center|This figure shows several spheres. The front-middle sphere uses the Beer-Lambert law to absorb certain frequencies of light. The front-right sphere has the same parameters, except it does not apply the Beer-Lambert law. This image took 5 hours and 24 minutes to render]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project 10 ==&lt;br /&gt;
&lt;br /&gt;
In Project 10, we were to implement participating media. Participating media involves computing the physical bounces of light in media such as fog, dust, and smoke, where the light bounces around inside the volume instead of just diffusing, reflecting, or refracting at a surface. &lt;br /&gt;
&lt;br /&gt;
To implement this, I used the standard ray-marching technique through an axis-aligned bounding box. A ray was sampled at small steps along its direction, and when those steps fell inside the bounding volume, they probabilistically hit some of the media. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p10_pm.png|frame|center|Participating media is used to simulate a gaseous volume under the cube. Right now my code is very inefficient and took six hours to get these results. Earlier images not requiring participating media took far less time to converge]]&lt;br /&gt;
&lt;br /&gt;
== Project 11 ==&lt;br /&gt;
&lt;br /&gt;
In Project 11, subsurface scattering is added to the renderer. Here, light hitting the surface of a material enters and bounces around inside the medium before exiting. Many materials, such as grapes, skin, and marble, exhibit this quality. Subsurface scattering in this implementation used ray marching, where the ray enters the medium and bounces around until exiting. Because the ray actually bounced around inside the object instead of just off the surface, the rendering was far more computationally expensive than rendering the scene without subsurface scattering. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p11_scatter.png|frame|center|Two spheres are placed side by side with a large rectangular light source overhead. The sphere on the left uses subsurface scattering while the sphere on the right uses only Lambertian reflection. Light shining from above simply bounces off the top of the Lambertian sphere, leaving the bottom dark and unilluminated. The subsurface-scattering sphere, however, has light passing through the medium and emerging at the bottom of the sphere, as many real materials would]]&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p10_pm.png</id>
		<title>File:Im synth p10 pm.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p10_pm.png"/>
				<updated>2015-01-18T07:20:15Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis</id>
		<title>Image Synthesis</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis"/>
				<updated>2015-01-18T07:19:50Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I took this class with [http://psgraphics.blogspot.com/ Dr. Pete Shirley] in 2006.  He was a very animated professor and loved talking about light and how it behaved.  At one point I remember him running mid-sentence into the kitchen connected to our classroom and coming back with a glass container, which he held up and looked at from different angles to talk about refraction.  &lt;br /&gt;
&lt;br /&gt;
All of the assignments were completed via blog postings.  I decided to write my renderer in Java for kicks, while most people chose C++.  Looking back, I slightly regret my decision, since I had no experience writing &amp;quot;fast&amp;quot; Java code and my renders took noticeably longer than my classmates'.  Still, it was one of the funnest college courses I had.  &lt;br /&gt;
&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
This first project samples various frequencies on the Macbeth color checker. If a sample passes a basic sampling check, its color is written into the frame buffer. All the samples are accumulated over the time steps to get a better average color. All these images were rendered at 720x480 (they seemed like good numbers) on a Toshiba Tecra S2. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1.png|thumb|center|1 Sample]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
This second project is similar to the first. I sampled XYZ estimates using the tristimulus curves and converted the samples to RGB on the graphics card using the standard Adobe RGB conversion matrix. It took my poor little laptop almost two minutes to render 1024 time steps at 720x480. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
These are samples from a simulated sensor. The grid lies on the xy-plane, and a sphere emitting light from its surface in random directions hits the sensor grid and accumulates XYZ values, which are converted to RGB on the graphics card and displayed on the screen. Below are images from this program at different numbers of photon emissions. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000.png|frame|center|1,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000000.png|frame|center|1,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s100000000.png|frame|center|100,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
== Project 4 and 5 ==&lt;br /&gt;
&lt;br /&gt;
In the previous project, a single sphere-shaped light emitter was placed onto the sensor grid. In this example, the light source has been moved back and a pinhole has been added between the sensor grid and all other objects in the scene. The pinhole only lets photons through if their paths pass through it to reach the grid; all other photons are discarded. This pinhole-camera scheme produces a much more detailed picture than the previous implementation, as it simulates a perspective frustum just like the eye or a real camera. A red diffuse sphere also bounces light from the light source to the pinhole. &lt;br /&gt;
&lt;br /&gt;
Included in this project was an importance sampling implementation, which fired photons only at the diffuse sphere and directly at the pinhole. The number of photons fired at each object was its share of the total photon budget, proportional to the solid angle it subtends relative to all directions. Because photons were sent only toward the pinhole and the sphere, the scene was rendered using far fewer photons. Although importance sampling required some extra overhead, it greatly reduced the per-collision calculation time, since there were far fewer photons in the scene. &lt;br /&gt;
&lt;br /&gt;
Both of these features were added to the previous code base at the same time. Importance sampling was needed to make debugging the pinhole code more interactive. The final images look better than those from the previous project and took much less time to synthesize. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p4_is.png|frame|center|313,290,000 emissions (876,964 passed through the pinhole)]]&lt;br /&gt;
&lt;br /&gt;
== Project 6 and 15 ==&lt;br /&gt;
&lt;br /&gt;
In Projects 4 &amp;amp; 5, the imager was modified to handle importance sampling and a pinhole camera. Importance sampling provided much faster rendering times by sending photons only at known objects in the scene and weighting them based on their solid angle. The photons were generated at the sensor grid and sent through the scene until they hit a light source (or bounced too many times). This seemed to provide a good sampling of the scene, but required a major rewrite of the code. &lt;br /&gt;
&lt;br /&gt;
This new version of the software adds a lens to the scene as well as motion blur. To handle motion blur, the red diffuse sphere was moved from (0, 1, 3) to (1, 2, 4) over the course of the rendering. This created a simple but interesting blur effect, as if the object had been exposed over many sensor units while the aperture was open. &lt;br /&gt;
&lt;br /&gt;
The lens was a simple biconvex lens attached to the pinhole. Upon contact with the lens, a photon's direction was bent based on its angle of incidence and the refractive index of the lens (1.4 here). As the photon left the lens, it was bent again. Snell's law was used to calculate the angle of refraction for each photon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p6_is.png|frame|center|Rendered using 18,000,000,000 photons -- 200,000 per sensor grid (4 hours, 11 seconds rendering time)]]&lt;br /&gt;
&lt;br /&gt;
== Project 7 and 8 ==&lt;br /&gt;
&lt;br /&gt;
In Project 7, we were to use our renderer to model the Cornell box, which could be checked against the physical model to gauge the accuracy of the rendering software. The scene's walls and light are modeled using the geometry and color data from the Cornell box at http://www.graphics.cornell.edu/online/box/data.html. &lt;br /&gt;
&lt;br /&gt;
In addition to the light source and the walls, Project 8 added Fresnel effects to the renderer. Instead of the two blocks found in the original Cornell box, two diffuse spheres have been placed in the far corners of the room. A large transparent sphere is placed in the center of the room, showing the light refracting through it. &lt;br /&gt;
&lt;br /&gt;
The scene's light seems rather blurry because it was moved down slightly to avoid creating artifacts with the ceiling, which is one giant polygon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p7_cb.png|frame|center|rendered at 400x400 and tone mapped. It took 4 hours to render, which is rather sad]]&lt;br /&gt;
&lt;br /&gt;
== Project 9 ==&lt;br /&gt;
&lt;br /&gt;
In Project 9, we were to add the Beer-Lambert law to our renderer. The Beer-Lambert law models the amount of light absorbed while traveling through a medium. Because each wavelength is absorbed according to the distance it must travel through the medium, some wavelengths are absorbed more than others, leaving the remaining wavelengths more pronounced. Certain types of glass absorb high and low wavelengths, leaving a greenish tint at certain angles; this effect is usually most visible where the light passes through the greatest thickness of the glass. To compute the absorption at a given frequency, Euler's number e was raised to the power of the distance times a large negative constant (which changes based on the scene metrics). &lt;br /&gt;
&lt;br /&gt;
In the picture below, five spheres are modeled. The back-left sphere is reflective, while the back-right sphere is diffuse; these spheres only provide background for the scene. The spheres in front are used for comparison. The left sphere is a diffuse/reflective sphere with a purplish hue. The middle sphere is a translucent sphere that applies the Beer-Lambert law based on the distance traveled through the medium, giving it a slightly greenish tint. The sphere on the right is also translucent, but does not apply the Beer-Lambert law. This shows the difference in hues produced by the principle. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p9_beere.png|frame|center|This figure shows several spheres. The front-middle sphere uses the Beer-Lambert law to absorb certain frequencies of light. The front-right sphere has the same parameters, except it does not apply the Beer-Lambert law. This image took 5 hours and 24 minutes to render]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project 10 ==&lt;br /&gt;
&lt;br /&gt;
In Project 10, we were to implement participating media. Participating media involves computing the physical bounces of light in media such as fog, dust, and smoke, where the light bounces around inside the volume instead of just diffusing, reflecting, or refracting at a surface. &lt;br /&gt;
&lt;br /&gt;
To implement this, I used the standard ray-marching technique through an axis-aligned bounding box. A ray was sampled at small steps along its direction, and when those steps fell inside the bounding volume, they probabilistically hit some of the media. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p10_pm.png|frame|center|Participating media is used to simulate a gaseous volume under the cube. Right now my code is very inefficient and took six hours to get these results. Earlier images not requiring participating media took far less time to converge]]&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p9_beere.png</id>
		<title>File:Im synth p9 beere.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p9_beere.png"/>
				<updated>2015-01-18T07:18:21Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis</id>
		<title>Image Synthesis</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=Image_Synthesis"/>
				<updated>2015-01-18T07:18:10Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I took this class with [http://psgraphics.blogspot.com/ Dr. Pete Shirley] in 2006.  He was a very animated professor and loved talking about light and how it behaved.  At one point I remember him running mid-sentence into the kitchen connected to our classroom and coming back with a glass container, which he held up and looked at from different angles to talk about refraction.  &lt;br /&gt;
&lt;br /&gt;
All of the assignments were completed via blog postings.  I decided to write my renderer in Java for kicks, while most people chose C++.  Looking back, I slightly regret my decision, since I had no experience writing &amp;quot;fast&amp;quot; Java code and my renders took noticeably longer than my classmates'.  Still, it was one of the funnest college courses I had.  &lt;br /&gt;
&lt;br /&gt;
== Project 1 ==&lt;br /&gt;
&lt;br /&gt;
This first project samples various frequencies on the Macbeth color checker. If a sample passes a basic sampling check, its color is written into the frame buffer. All the samples are accumulated over the time steps to get a better average color. All these images were rendered at 720x480 (they seemed like good numbers) on a Toshiba Tecra S2. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1.png|thumb|center|1 Sample]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p1_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 2 ==&lt;br /&gt;
&lt;br /&gt;
This second project is similar to the first. I sampled XYZ estimates using the tristimulus curves and converted the samples to RGB on the graphics card using the standard Adobe RGB conversion matrix. It took my poor little laptop almost two minutes to render 1024 time steps at 720x480. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s16.png|thumb|center|16 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s256.png|thumb|center|256 Samples]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p2_s1024.png|thumb|center|1024 Samples]]&lt;br /&gt;
&lt;br /&gt;
== Project 3 ==&lt;br /&gt;
&lt;br /&gt;
These are samples from a simulated sensor. The grid lies on the xy-plane, and a sphere emitting light from its surface in random directions hits the sensor grid and accumulates XYZ values, which are converted to RGB on the graphics card and displayed on the screen. Below are images from this program at different numbers of photon emissions. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000.png|frame|center|1,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s1000000.png|frame|center|1,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p3_s100000000.png|frame|center|100,000,000 Emissions]]&lt;br /&gt;
&lt;br /&gt;
== Project 4 and 5 ==&lt;br /&gt;
&lt;br /&gt;
In the previous project, a single sphere-shaped light emitter was placed onto the sensor grid. In this example, the light source has been moved back and a pinhole has been added between the sensor grid and all other objects in the scene. The pinhole only lets photons through if their paths pass through it to reach the grid; all other photons are discarded. This pinhole-camera scheme produces a much more detailed picture than the previous implementation, as it simulates a perspective frustum just like the eye or a real camera. A red diffuse sphere also bounces light from the light source to the pinhole. &lt;br /&gt;
&lt;br /&gt;
Included in this project was an importance sampling implementation. This fired photons at the diffuse sphere and directly at the pinhole. The number of photons fired at each object was proportional to its solid angle, relative to the total number of photons previously fired in all directions. Because photons were only sent toward the pinhole and the object, the scene was rendered using far fewer photons. Although the importance sampling added some overhead, it greatly reduced the total collision-calculation time (there being far fewer photons in the scene). &lt;br /&gt;
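The budget split can be sketched with the closed-form solid angle of a sphere; the radii, distances, and weighting below are illustrative assumptions, not the original implementation:

```java
// Sketch of the photon-budget split described above. Each target's share of
// the photons is proportional to the solid angle it subtends at the light,
// and each photon carries a weight so totals match uniform emission.
// All scene dimensions here are made up for illustration.
public class ImportanceSplit {
    // Solid angle of a sphere of radius r seen from distance d (requires d > r).
    static double solidAngle(double r, double d) {
        return 2 * Math.PI * (1 - Math.sqrt(1 - (r / d) * (r / d)));
    }

    public static void main(String[] args) {
        double omegaSphere = solidAngle(0.5, 4.0);    // hypothetical diffuse sphere
        double omegaPinhole = solidAngle(0.05, 3.0);  // hypothetical pinhole disc
        double total = omegaSphere + omegaPinhole;
        long budget = 1_000_000;
        long toSphere = Math.round(budget * omegaSphere / total);
        long toPinhole = budget - toSphere;
        // Under uniform emission only omega / 4pi of the budget would head toward
        // each target, so each aimed photon is down-weighted by that fraction
        // (spread over the photons actually fired into the cone).
        double wSphere = (omegaSphere / (4 * Math.PI)) * (double) budget / toSphere;
        System.out.printf("sphere: %d photons, pinhole: %d photons, weight %.4f%n",
                toSphere, toPinhole, wSphere);
    }
}
```

The down-weighting is what keeps the expected energy identical to firing the full budget in all directions.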
&lt;br /&gt;
Both of these projects were added to the previous code base at the same time. Importance sampling was needed to make debugging the pinhole code more interactive. The final images look better than the previous project's and took much less time to synthesize. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p4_is.png|frame|center|313,290,000 emissions (876,964 passed through the pinhole)]]&lt;br /&gt;
&lt;br /&gt;
== Project 6 and 15 ==&lt;br /&gt;
&lt;br /&gt;
In Projects 4 &amp;amp; 5, the imager was modified to handle importance sampling and a pinhole camera. The importance sampling provided much faster rendering times by sending photons only toward known objects in the scene, weighting them based on their solid angles. The photons were generated at the sensor grid and sent through the scene until they hit a light source (or bounced too many times). This provided a good sampling of the scene, but required a major rewrite of the code. &lt;br /&gt;
&lt;br /&gt;
This new version of the software adds a lens to the scene as well as motion blur. To handle motion blur, the red diffuse sphere was moved from (0, 1, 3) to (1, 2, 4) over the course of the render. This created an easy but interesting blur effect, as if the object were exposed over many sensor time steps while the aperture was open. &lt;br /&gt;
&lt;br /&gt;
The lens was a simple biconvex lens attached to the pinhole. Upon contact with the lens, each photon's direction was bent based on its angle of incidence and the refractive index of the lens (1.4 here). As the photon left the lens, it was bent again. Snell's law was used to calculate the angles of refraction for each photon. &lt;br /&gt;
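The per-surface refraction step can be sketched with the vector form of Snell's law (the class name and the 45-degree test case are mine):

```java
// Sketch of the refraction step at each lens surface, using the vector form
// of Snell's law. eta is the ratio n1/n2, e.g. 1/1.4 entering glass of
// index 1.4 from air. Names are illustrative, not from the original renderer.
public class Refract {
    // d: unit incident direction, n: unit surface normal facing the incoming ray.
    // Returns the refracted unit direction, or null on total internal reflection.
    static double[] refract(double[] d, double[] n, double eta) {
        double cosI = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2]);
        double sinT2 = eta * eta * (1 - cosI * cosI);
        if (sinT2 > 1) return null;          // total internal reflection
        double cosT = Math.sqrt(1 - sinT2);
        double k = eta * cosI - cosT;        // coefficient on the normal term
        return new double[]{
            eta * d[0] + k * n[0],
            eta * d[1] + k * n[1],
            eta * d[2] + k * n[2]
        };
    }

    public static void main(String[] args) {
        // 45-degree incidence from air into glass with index 1.4: the ray
        // bends toward the normal (sin of the refracted angle is sin45 / 1.4).
        double s = Math.sqrt(0.5);
        double[] t = refract(new double[]{s, 0, -s}, new double[]{0, 0, 1}, 1 / 1.4);
        System.out.printf("refracted: (%.4f, %.4f, %.4f)%n", t[0], t[1], t[2]);
    }
}
```

Leaving the lens the same routine runs with eta = 1.4 and the outward normal, bending the ray away from the normal.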
&lt;br /&gt;
[[File:im_synth_p6_is.png|frame|center|Rendered using 18,000,000,000 photons -- 200,000 per sensor grid (4 hours, 11 seconds rendering time)]]&lt;br /&gt;
&lt;br /&gt;
== Project 7 and 8 ==&lt;br /&gt;
&lt;br /&gt;
In Project 7, we were to use our renderer to model the Cornell box. The result was compared against the physical model to gauge the accuracy of the rendering software. The scene's walls and light are modeled using the geometry and color data from the Cornell box at http://www.graphics.cornell.edu/online/box/data.html. &lt;br /&gt;
&lt;br /&gt;
In addition to the light source and the walls, Project 8 added Fresnel effects to the renderer. Instead of the two blocks found in the original Cornell box, two diffuse spheres have been placed in the far corners of the room. A large transparent sphere is placed in the center of the room, which shows the light refracting through it. &lt;br /&gt;
&lt;br /&gt;
The scene's light seems rather blurry because it was moved down slightly to avoid creating artifacts with the ceiling, which is one giant polygon. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p7_cb.png|frame|center|rendered at 400x400 and tone mapped. It took 4 hours to render, which is rather sad]]&lt;br /&gt;
&lt;br /&gt;
== Project 9 ==&lt;br /&gt;
&lt;br /&gt;
In Project 9, we were to add the Beer-Lambert Law to our renderer. The Beer-Lambert Law models the amount of light absorbed while traveling through a medium. Because absorption depends on both the wavelength and the distance the light must travel through the medium, some wavelengths are attenuated more than others, leaving the remaining wavelengths more pronounced. Certain types of glass absorb the high and low wavelengths, leaving a greenish tint at certain angles; this effect is usually most visible where the light passes through the greatest thickness of the glass. To compute the absorption at a given frequency, Euler's number e was raised to the power of the distance times a large negative constant (which changes based on the scene metrics). &lt;br /&gt;
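The attenuation step above can be sketched as follows; the absorption coefficients are made up for illustration (the real constants depend on the scene metrics, as noted):

```java
// Sketch of Beer-Lambert attenuation: the fraction of light surviving a
// path of length d through a medium with absorption coefficient k (which
// varies per wavelength) is e^(-k * d). The coefficients below are
// hypothetical; green absorbs least here, producing the greenish tint.
public class BeerLambert {
    static double transmit(double k, double d) {
        return Math.exp(-k * d);
    }

    public static void main(String[] args) {
        double d = 2.0;                                // path length through the sphere
        double kRed = 1.2, kGreen = 0.3, kBlue = 0.9;  // hypothetical per-channel coefficients
        System.out.printf("R %.3f  G %.3f  B %.3f%n",
                transmit(kRed, d), transmit(kGreen, d), transmit(kBlue, d));
    }
}
```

Longer paths through the glass shrink every channel, but the unevenly sized coefficients shift the hue, which is why the tint is strongest where the medium is thickest.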
&lt;br /&gt;
In the picture below, five spheres are modeled. The back-left sphere is reflective, while the back-right sphere is diffuse; these spheres only provide background for the scene. The spheres in front are used for comparison. The left sphere is a diffuse/reflective sphere with a purplish hue. The middle sphere is a translucent sphere that implements the Beer-Lambert Law based on the distance light travels through the medium, which gives it the slightly greenish tint. The sphere on the right is also translucent but does not implement the Beer-Lambert Law, to show the difference in hues produced by this principle. &lt;br /&gt;
&lt;br /&gt;
[[File:im_synth_p9_beere.png|frame|center|This figure shows several spheres. The front-middle sphere uses the Beer-Lambert Law to absorb certain frequencies of light. The front-right sphere has the same parameters, except it does not implement the Beer-Lambert Law. This image took 5 hours and 24 minutes to render]]&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p7_cb.png</id>
		<title>File:Im synth p7 cb.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p7_cb.png"/>
				<updated>2015-01-18T07:16:08Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p6_is.png</id>
		<title>File:Im synth p6 is.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p6_is.png"/>
				<updated>2015-01-18T07:13:33Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p4_is.png</id>
		<title>File:Im synth p4 is.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p4_is.png"/>
				<updated>2015-01-18T07:11:33Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p3_s100000000.png</id>
		<title>File:Im synth p3 s100000000.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p3_s100000000.png"/>
				<updated>2015-01-18T07:08:52Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p3_s1000000.png</id>
		<title>File:Im synth p3 s1000000.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p3_s1000000.png"/>
				<updated>2015-01-18T07:08:35Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p3_s1000.png</id>
		<title>File:Im synth p3 s1000.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p3_s1000.png"/>
				<updated>2015-01-18T07:08:12Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	<entry>
		<id>http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p2_s1024.png</id>
		<title>File:Im synth p2 s1024.png</title>
		<link rel="alternate" type="text/html" href="http://strattonbrazil.com/wiki/index.php?title=File:Im_synth_p2_s1024.png"/>
				<updated>2015-01-18T07:05:41Z</updated>
		
		<summary type="html">&lt;p&gt;Strattonbrazil: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Strattonbrazil</name></author>	</entry>

	</feed>