ClioSport.net


Anybody here into non-realtime graphics rendering?



SharkyUK

ClioSport Club Member
Yeah, definitely push on with it mate. You've obviously got a talent and the artistic eye for it. It's probably worth building up a showreel / portfolio if you aren't already doing so. I look forward to your future creations... ;)
 

SharkyUK

ClioSport Club Member
Due to various commitments and other projects I haven't had much time to work on my raytracer recently. However, I did produce another couple of renders. The 3D models were created in ZBrush (by professional artists, not me!) and I have simply taken them, tweaked them in Blender and then rendered them using my software. I wish I had more time to work on this at the moment :(

9529934945_ddf0f5bdb5_o.jpg

Warrior Character Render by SharkyUK, on Flickr

9529934727_80aaa72c89_o.jpg

Earthquake Character Render by SharkyUK, on Flickr
 
  04BGFF182CLIORS
Nowhere near the same level as any of this, but I dabbled in Maya/Mudbox during my final year of uni:

190048_10150121102782250_1421933_n.jpg188711_10150126503612250_680967_n.jpg167995_10150089172897250_2436658_n.jpg
(part of my 'A Modern Chernobyl' short film, based at Dungeness - was WAY too over-ambitious and didn't turn out how I imagined!)

And a small bit of modeling here:

[video]https://www.facebook.com/video/embed?video_id=484868957249[/video]
 
  04BGFF182CLIORS
Cheers mate, I'd love to find time to get back into it, but I have so much print work I don't seem to get the chance to do 'hobby' work :(

I'll take a peek at the settings on the video - it's a different project, a WW2 'bunker' setting, nothing fancy, just a few camera sweeps (all set by the tutor at the time)
 

Rob

ClioSport Moderator
Bloody love that clockwork rabbit. Really nice.

You could definitely spend some time developing it and making it longer, like when it zooms out, add duplicates, maybe a few different ones colour wise, then you could zoom into another for another story etc.

I miss 3D stuff, as close to it as I get now is doing bloody BIM managed buildings.

I have a video somewhere of something I did at college, but it's so poor in comparison to some stuff here I'd be embarrassed to post it. That rabbit one has certainly inspired something in me though, lovely simplicity and a very clean feel to it.
 

Rob

ClioSport Moderator
I don't know if it'd interest any of you guys (a little off topic), but a good friend of mine I haven't seen in a while showed me his latest work down the pub on Friday, really impressive stuff IMO:

Kin1_zpsd9552ace.jpg


Kin2_zpse2fe714b.jpg


Kin3_zps9cb09ae2.jpg
 
  Not a 320d
I couldn't use Maya. My IQ is less than 1 million.

I used to animate s**t for CSS....Does this count?

 
Last edited by a moderator:

SharkyUK

ClioSport Club Member
Cheers mate, I'd love to find time to get back into it, but I have so much print work I don't seem to get the chance to do 'hobby' work :(
Yeah, I know what you mean. It's not something you can just drop-in and drop-out of for a few minutes at a time! :)

I miss 3D stuff, as close to it as I get now is doing bloody BIM managed buildings. I have a video somewhere of something I did at college, but it's so poor in comparison to some stuff here I'd be embarrassed to post it. That rabbit one has certainly inspired something in me though, lovely simplicity and a very clean feel to it.
Go for it, Rob - post it up. You've certainly nothing to be embarrassed about.

I don't know if it'd interest any of you guys (a little off topic), but a good friend of mine I haven't seen in a while showed me his latest work down the pub on Friday, really impressive stuff IMO:
Awesome work by your mate; he's definitely got talent. :D

I couldn't use Maya. My IQ is less than 1 million. I used to animate s**t for CSS....Does this count?
Well... this thread is about non-realtime rendering... but no worries, anything 3D graphics related and I'm interested! :D

I'm trying to sort out some screenshots from projects I've been doing in work over the years, but I have to get permission first. Would be great to share them if I can. Fingers crossed.
 

SharkyUK

ClioSport Club Member
I haven't had the chance/time to work on the raytracer recently and have instead been working on a little side project that started at work some time ago. It's a realtime terrain viewer that can take various data formats and then stream and render the geometry on the fly. It's still using DirectX9 and Shader Model 3.0 (I'll be moving to DirectX11 when I have the time and enthusiasm to do so). Under the hood it's both a forward renderer and a deferred renderer (but not at the same time). It has full dynamic lighting with the usual glow/bloom and crepuscular rays, plus a nasty, nasty lens flare effect (which is evident in a few of the pics below). I may introduce a quick and dirty screen-space ambient occlusion shader into the deferred renderer at some point as it would be relatively straightforward to do so.

Here are a few captures (these terrain sets are derived from real-world LIDAR data).

grab_20131128_140517_zpsbcf73547.jpg


grab_20131128_140655_zps98c3d7bd.jpg


grab_20131128_141122_zps2ed27814.jpg


grab_20131128_141531_zpsf19ae472.jpg


grab_20131128_141742_zps4d480f13.jpg


grab_20131128_142053_zps6580b0a2.jpg


grab_20131128_142421_zps6850295b.jpg


grab_20131128_142626_zps5470b896.jpg


grab_20131128_143903_zpsc34ce49c.jpg
 

SharkyUK

ClioSport Club Member
Ok, so it's been a while since I've had the opportunity to work on my raytracing software but - at last - I was able to spend a few hours on it tonight. I tracked down (and fixed) a few bugs, slightly changed the lighting algorithm I'm currently using and also added a little bit of framework code for converting colour values between different colour spaces. However, the biggest thing I implemented tonight (first pass) was camera-based depth of field. This is generated by effectively mimicking how light interacts with a real camera lens (taking into account focal length, aperture size and so forth). It's a very basic implementation at the moment and it's also quite slow. Very slow. But that's because I'm now performing millions and millions more calculations per scene to realise the depth of field effect.
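For anyone curious, the core idea can be sketched in a few lines of Python. This is purely illustrative (the names and signature are made up, not lifted from my code): the ray origin is jittered across a thin-lens aperture and re-aimed at the plane of perfect focus, so only geometry at the focal distance stays sharp.

```python
import math
import random

def thin_lens_ray(cam_pos, pixel_dir, focal_dist, aperture):
    """Toy thin-lens camera ray (hypothetical names/signature).

    Points at exactly `focal_dist` along the original ray stay sharp;
    everything nearer or farther receives a blur proportional to `aperture`.
    """
    # Where the original (pinhole) ray intersects the plane of perfect focus
    focal_point = [cam_pos[i] + pixel_dir[i] * focal_dist for i in range(3)]
    # Rejection-sample a point on the unit disc for the lens aperture
    while True:
        lx, ly = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        if lx * lx + ly * ly <= 1.0:
            break
    origin = [cam_pos[0] + lx * aperture, cam_pos[1] + ly * aperture, cam_pos[2]]
    # Redirect the ray from the lens sample towards the focal point
    d = [focal_point[i] - origin[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in d))
    return origin, [c / length for c in d]
```

Averaging many such rays per pixel is what makes the effect so expensive.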

But - for the first attempt - I'm happy with the results. Here's a scene generated from mathematical primitives (spheres, cubes, plane and procedural texture):

13614112314_d83b1b9ee5_o.jpg


And here's the Alien model I used in previous test renders, albeit showing the depth of field effect in action (with a harsher lighting to emphasise the effect):

13613760265_37fcf7714b_o.jpg


13613762285_c996f30b95_o.jpg
 

SharkyUK

ClioSport Club Member
Did a bit more work on the virtual camera system, although there's not much to see in terms of visual improvements. Further DoF testing...

13684942255_4e32b12be0_o.jpg


And again - this time using the Crytek Sponza test model (with and without DoF enabled)...

13684983773_a5c4002a0b_o.jpg


13684944295_64aa79187a_o.jpg
 

SharkyUK

ClioSport Club Member
I decided to have a crack at approximating (crudely) global illumination in a scene - more specifically, indirect lighting contributions. I did this using a technique called ambient occlusion.

Ambient occlusion effectively represents how visible, or exposed, a given point is within a scene with respect to ambient lighting (i.e. lighting that does NOT come directly from a source such as a lamp or the sun). Simply, the more enclosed an area is within a scene, the darker it appears. Through ambient occlusion calculations we can approximate how indirect light would radiate through a scene and this results in a somewhat diffuse (non-directional) lighting throughout and soft/fuzzy undefined shadows. As mentioned, enclosed areas appear darker whilst 'open' areas are affected less. This can be seen in the third image in the sequence shown below.

Ambient occlusion is often used in modern games, although they use a screen-space method based on pixel depth to allow performance to remain optimal. Here, proper ambient occlusion is being used; whereby the illumination at a point is a function of actual geometry in the scene. This produces much better results in terms of the final rendered image, but the computational costs are staggering and simply not possible in real time.
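The geometric flavour of AO boils down to a visibility estimate per shaded point. Here's an illustrative Python sketch (not my actual code - `occluded` is an assumed scene-intersection callback):

```python
import random

def ambient_occlusion(point, normal, occluded, n_samples=64):
    """Crude geometric AO sketch: fire random rays over the hemisphere
    above `point` and return the fraction that escape the scene.
    1.0 means fully open, 0.0 fully enclosed."""
    hits = 0
    for _ in range(n_samples):
        # Rejection-sample a random direction inside the unit sphere
        while True:
            d = [random.uniform(-1.0, 1.0) for _ in range(3)]
            if 0.0 < sum(c * c for c in d) <= 1.0:
                break
        # Flip it into the hemisphere on the normal's side
        if sum(d[i] * normal[i] for i in range(3)) < 0.0:
            d = [-c for c in d]
        if occluded(point, d):
            hits += 1
    return 1.0 - hits / n_samples
```

The returned factor simply scales the ambient term, which is why enclosed corners darken and open areas are barely affected.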

In the example renders below I produced 4 images of a Predator model, with different settings enabled, to show how the ambient occlusion contributions can visibly affect the final rendered image.

The third image shows the results of the ambient occlusion pass and the last image shows the final rendered image with direct and indirect lighting applied, along with hard-edged shadows.

13859189693_9a68dc0cfb_o.jpg

Ambient Occlusion Test by SharkyUK, on Flickr
 

SharkyUK

ClioSport Club Member
Just another test render I did with ambient occlusion calculations enabled. This actually took 4.5 hours to render... which suggests I really need to look into improving the acceleration structures I use for the scene geometry and ray propagation at some point! The fact that the model is several million densely packed polygons doesn't help.

13909625674_e70cf7a93d_o.jpg

Test Render - Chinese Dragon by SharkyUK, on Flickr
 

SharkyUK

ClioSport Club Member
I had a go at implementing Atmospheric Scattering the other night (based on a paper from Siggraph 1993 by Nishita et al., "Display of The Earth Taking into Account Atmospheric Scattering"). The aim was to produce some semi-realistic sky colours that are determined by the size of the planet (Earth) and sun direction.

Atmospheric scattering is calculated by determining the Rayleigh and Mie scattering components; how the sky appears is due to the combination of the two. Rayleigh scattering is responsible for the blue colour of the sky (and its red-orange colour at sunrise/sunset) and is caused by light being scattered by air molecules much smaller than the light's wavelength. Air molecules scatter blue light more than green and red light (hence the typically blue colour of the sky). Mie scattering is responsible for the whitish, hazy look of the atmosphere. This is due to light being scattered by particles bigger than the light's wavelength. These particles are known as aerosols and they scatter light of all wavelengths roughly equally.
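Two of the standard building blocks can be shown in a few lines (a sketch of the textbook formulas, not my implementation): the Rayleigh phase function and the 1/λ⁴ wavelength dependence that makes blue out-scatter red.

```python
import math

def rayleigh_phase(cos_theta):
    """Rayleigh phase function: 3/(16*pi) * (1 + cos^2(theta))."""
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)

def relative_rayleigh_strength(wavelength_nm, ref_nm=550.0):
    """Rayleigh scattering goes as 1/lambda^4, so shorter (bluer)
    wavelengths scatter far more strongly than longer (redder) ones."""
    return (ref_nm / wavelength_nm) ** 4
```

Plugging in ~440nm (blue) versus ~680nm (red) gives a scattering ratio of roughly five to one, which is essentially why the sky is blue.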

Here are a couple of test renders showing the same scene with different sun directions (rendered at sea-level). The implementation is not perfect but the math is pretty heavy-going, so I think there are a few bugs in it. I am hoping to implement aerial perspective next which, in conjunction with this sky colouring, should produce some nice looking terrains.

13948075461_028bba0b4a_o.jpg

Atmospheric Scattering Test Renders by SharkyUK, on Flickr

13971231723_7d6ea21440_o.jpg

Atmospheric Scattering Test Renders by SharkyUK, on Flickr
 

SharkyUK

ClioSport Club Member
Another update to the raytracing software; this time the implementation of crude (at least in its current form) area lighting. Area lights are computationally more expensive than point lights, spots and directional lights - but result in 'nicer' shadows with soft edges and proper umbra and penumbra regions. Here's a Lego Snowspeeder model I found and used as my test subject:

13985015512_63b7de409a_o.jpg

Area Light Test by SharkyUK, on Flickr
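The soft shadows come from jittering shadow rays across the light's surface rather than towards a single point. An illustrative Python sketch (the rectangle parameterisation and the `blocked` occlusion query are assumed names, not my renderer's API):

```python
import random

def soft_shadow_factor(point, light_corner, edge_u, edge_v, blocked, n_samples=16):
    """Jitter shadow rays to random points on a rectangular area light and
    return the lit fraction: 1.0 is fully lit, 0.0 is deep umbra, and values
    in between form the penumbra. `blocked(a, b)` is an assumed scene query
    for occlusion between two points."""
    lit = 0
    for _ in range(n_samples):
        u, v = random.random(), random.random()
        sample = [light_corner[i] + u * edge_u[i] + v * edge_v[i] for i in range(3)]
        if not blocked(point, sample):
            lit += 1
    return lit / n_samples
```

More samples per pixel smooth the penumbra at the cost of render time, which is where the extra expense over point lights comes from.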
 

SharkyUK

ClioSport Club Member
Another update to the core renderer in the raytracer which saw the implementation of a Blinn-Phong specular reflection model, a modification to the diffuse contribution calculations and a minor update to the area lighting algorithm. Here's a new render of a model I used in a previous image, albeit with the new updates and improvements in place.

14059991871_a4bd5243c8_o.jpg

Test Render of Imrod by SharkyUK, on Flickr
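For those who haven't met it, the Blinn-Phong specular term is pleasingly small - this is the standard formulation (sketched in Python for illustration, not my renderer's code):

```python
import math

def _normalised(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def blinn_phong_specular(normal, to_light, to_eye, shininess):
    """Blinn-Phong specular term: max(0, N.H)^shininess, where H is the
    normalised half-vector between the (unit) light and view directions."""
    half = _normalised([to_light[i] + to_eye[i] for i in range(3)])
    n_dot_h = max(0.0, sum(normal[i] * half[i] for i in range(3)))
    return n_dot_h ** shininess
```

The half-vector trick avoids computing a full reflection vector per sample, and higher `shininess` exponents give tighter, glossier highlights.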
 

SharkyUK

ClioSport Club Member
Been working on a few new things (which are still broken) but produced a couple more renders along the way.

First, this bathroom scene (which looks a bit s**t as I'm using a very limited rendering/shading model as opposed to physically based ray tracing, which is where I want to take the software next). Thanks to "namicus" for the 3D modelling.

14286731655_7dfe92885c_o.jpg

Bathroom (non-pbrt) by SharkyUK, on Flickr

And the good old Millennium Falcon, flying above Coruscant. This took about 4 hours to render due to the complexity of the model and the fact I was calculating ambient occlusion as well as soft shadowing from two opposing area lights. Quite happy with the result, though. Thanks to "KuhnIndustries" for this detailed Millennium Falcon 3D model.

14264280046_5ca2e5c4de_o.jpg

Millenium Falcon Render by SharkyUK, on Flickr
 
Last edited:

SharkyUK

ClioSport Club Member
So... I decided to rewrite part of the software so that it now works (and calculates) correctly in linear colour space. Input textures (unless already in a linear colour format) and RGB colours specified by the user (for object colours, fog colour, etc.) are now adjusted to account for gamma correction. This means the meat of the calculations can be done in linear colour space, which fits in nicely with the summed lighting techniques, and - as a final step - an inverse gamma correction can be applied to the rendered output for correct viewing on screen (or saving to a file). This is the 'proper' way to do things but it renders (pun intended) a lot of my previous scenes useless, as lighting values etc. now have to be tweaked to better suit the linear colour space system...
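The decode/encode pair is tiny when written out. This sketch uses the simple gamma-2.2 power-law approximation rather than the exact piecewise sRGB transfer curve (and the function names are mine, purely for illustration):

```python
def to_linear(c, gamma=2.2):
    """Decode a gamma-encoded texture/colour value (0..1) into linear
    space before any lighting maths is done on it."""
    return c ** gamma

def to_display(c, gamma=2.2):
    """Inverse gamma correction, applied once as the final output step
    before showing the image on screen or saving it to a file."""
    return c ** (1.0 / gamma)
```

The key point is that lighting sums and multiplies only behave physically in linear space; doing them on gamma-encoded values is what made the old scenes look subtly wrong.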

A couple of test renders with the new setup.

14309007795_752495eea4_o.jpg

Crytek Sponza Test Render by SharkyUK, on Flickr

14329205463_b828f71b80_o.jpg

Lego Truck Test Render by SharkyUK, on Flickr
 

SharkyUK

ClioSport Club Member
It's been quite some time since I last wrote in this thread... busy times in and out of work mean I've had very little time to work on my raytracing software. It's only been the last few days where I've picked it up again.

I decided I wasn't happy with the shadowing, supersampling and ambient occlusion / indirect lighting results, so I've spent a couple of days rewriting those. I now use a cosine-weighted hemisphere sampling algorithm in the indirect lighting calculations and it produces much better results with negligible performance loss. Here's a montage of test renders I've made over the last couple of days.

16231845791_2c849eb405_o.jpg
Raytracer Revisited
by SharkyUK, on Flickr

16231845531_ecd42cd10b_o.jpg
Raytracer Revisited
by SharkyUK, on Flickr

16207806996_0766a3873f_o.jpg
Raytracer Revisited
by SharkyUK, on Flickr
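The cosine-weighted sampling mentioned above is a neat little algorithm in its own right. A sketch (illustrative Python, samples taken about a local +Z axis rather than an arbitrary surface normal):

```python
import math
import random

def cosine_weighted_sample():
    """Cosine-weighted hemisphere direction about +Z (Malley's method:
    sample the unit disc uniformly, then project up onto the hemisphere).
    The resulting PDF is cos(theta)/pi, so samples cluster towards the
    surface normal - exactly where the diffuse contribution is largest."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))   # cos(theta)
    return [x, y, z]
```

Because the sample density already matches the cosine term in the lighting integral, the noise drops for the same ray count compared to uniform hemisphere sampling - hence "better results with negligible performance loss".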

In doing the above work I broke the camera optics and depth of field algorithms so they need fixing (re-factoring if I'm being honest). That's on the to-do list along with revisiting how light rays pass through various media. I'm not happy with how my implementation currently works so plan on altering how I handle the reflection, refraction, transmission and absorption of light rays through various media.
 
  Cayman S Edition 1
I wish I had the faintest clue (and the ability) to understand how you do all that stuff. It's simply stunning.
 

SharkyUK

ClioSport Club Member
Seeing as my Trophy is f**ked again and I can't go out driving at the weekends, I've spent some more time working on my rendering software. Here are a few more screenshots. The first two screenshots simply show scenes with zero direct lighting; the 'shadows' are generated from ambient light bouncing around the scene (effectively approximating global illumination using ambient occlusion). I'm now also using a form of importance sampling to generate the sample rays used to generate the occlusion; i.e. cosine-weighted hemisphere sampling. The results are so much better. The images below use this method and are using 256 sample rays per pixel.

16169860468_ccdb7bd320_o.jpg
Test Render - Pure AO
by SharkyUK, on Flickr

16169860538_98e3bf88ae_o.jpg
Test Render - Pure AO
by SharkyUK, on Flickr

The next two scenes simply show the results of a revised reflection / refraction (transmission) algorithm and a depth-of-field algorithm that now works properly (my last implementation didn't). As I've currently broken the multithreading support in my renderer the scene with the yellow power loader in the background took almost 6 hours to render at 2560x1440.

16294266771_38b3618c0e_h.jpg
Raytracer - Reflection Test
by SharkyUK, on Flickr

15726762114_daab8110af_h.jpg
Test Render - DoF
by SharkyUK, on Flickr
 
  Cayman S Edition 1
They're just epic.

And again my head hurts trying to work out how the **** you do that.
 

SharkyUK

ClioSport Club Member
They're just epic.

And again my head hurts trying to work out how the **** you do that.
Thanks mate :) I can't claim credit for some of the 3D models I use though; some are from friends or downloaded from the Internet. My specialist area is the software that sits behind it all and turns those 3D worlds/models into something you see on the screen. In a nutshell, I'm taking a bunch of well-established algorithms and equations (based on how light physically bounces around the world, how material properties affect that light, and how the human eye perceives that light) and encapsulating them in the software I'm writing; the results are what you see in the screenshots. It's also worth bearing in mind that this is a job and a passion of mine, and something I've been doing for the best part of 25 years now... 😊 ...and I still enjoy it as much now as when I started.
 

SharkyUK

ClioSport Club Member
One of the 3D websites / stores I visit was giving away a free laser-scanned male head model recently so I took the opportunity to grab it and put it through my rendering software. Whilst it doesn't look especially complex, it came with a significant amount of texture data: for specular, gloss, diffuse colour, bump mapping, subsurface scattering, epidermal scattering, Fresnel reflection, diffuse roughness and a few others. All in, this head model alone weighed in at around the 1GB mark...

16378626207_719e661210_o.jpg
Male Head Test Render
by SharkyUK, on Flickr

And a zombie... why not?! :)

16412811676_ef28613e54_o.jpg
Zombie Head Test Render
by SharkyUK, on Flickr

The images that the system produces are basically 'clamped' to a specific (limited) colour range so that they can be displayed on a monitor (effectively resulting in an LDR - Low Dynamic Range - image). However, internally my system represents the image in an HDR (High Dynamic Range) format, which means I can represent a large spread of colour and light intensities across the spectrum. Ultimately though, this has to be mapped down to a lower dynamic range for display purposes. Through scene post-processing, compositing and tone mapping it is possible to portray a high dynamic range of lighting on a monitor, using glows (overbright 'bloom' areas) and similar techniques. I experimented with these techniques recently and here (below) are some images showing the results.

In this case the method is relatively simple: render the scene, then determine which parts of it are above a certain luminance/brightness. Those bright areas are stored in a buffer for processing later. Once the scene has been analysed, a Gaussian blur is applied to that buffer of bright areas (with the kernel size and radius determined as a function of the source brightness). This blur is performed a number of times until the desired effect is achieved. Once complete, the 'bright' buffer is additively blended with the original scene image, which effectively produces the 'bloom' (glow) effect. Finally, the scene is tone mapped to give the final image a specific look (e.g. filmic) and bring it into a range of colour intensities that can be reproduced on a display device. The first image shows the sequence (sort of) directly in my renderer...

16563919212_dddab52682_o.jpg
simplehdrprocess
by SharkyUK, on Flickr
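As a rough illustration of that bright-pass / blur / blend / tone-map sequence, here's a toy 1-D greyscale version (deliberately miniature - a 3-tap box blur stands in for the Gaussian, and the Reinhard-style curve stands in for a filmic tone map):

```python
def bloom_and_tonemap(pixels, threshold=1.0, blur_passes=3):
    """Toy 1-D sketch of the pipeline described above: bright-pass the HDR
    values, blur the bright buffer a few times, additively blend it back,
    then apply a Reinhard-style tone map (x / (1 + x)) to reach 0..1."""
    bright = [max(0.0, p - threshold) for p in pixels]        # bright-pass
    for _ in range(blur_passes):                              # crude 3-tap blur
        bright = [(bright[max(i - 1, 0)] + bright[i] +
                   bright[min(i + 1, len(bright) - 1)]) / 3.0
                  for i in range(len(bright))]
    combined = [p + b for p, b in zip(pixels, bright)]        # additive blend
    return [c / (1.0 + c) for c in combined]                  # tone map to LDR
```

Run on a row with one overbright pixel, the glow visibly bleeds into the neighbouring pixels while everything lands back in the displayable 0..1 range.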

And here are some of the test images I've produced.

15944533933_3b8b5708ba_o.jpg
Dragon HDR Test Render
by SharkyUK, on Flickr

15944360813_7815f16f3f_o.jpg
Crazy Sailor Test Render
by SharkyUK, on Flickr

16538543916_fd76cd38d3_o.jpg
Pistons Test Render
by SharkyUK, on Flickr

16377091730_77bedddb28_o.jpg
Hotdog Samurai Test Render
by SharkyUK, on Flickr

16564559915_7767d276b3_o.jpg
Quadbot Test Render
by SharkyUK, on Flickr

16378625777_4b19528693_o.jpg
Male Head Test Render HDR
by SharkyUK, on Flickr

15944360293_9d4d2423dd_o.jpg
HDR Test Render
by SharkyUK, on Flickr

16562885911_042dc8d2b3_o.jpg
HDR Test Render
by SharkyUK, on Flickr

I want to look closer at image-based lighting and HDRI imaging next.
 

sn00p

ClioSport Club Member
  A blue one.
Can I just say....

EWW MFC!!!!

Barring that, awesome stuff as ever. One day when I get some time to spare I might try my hand at a basic ray tracer, nothing that fancy though.

I remember making my peers marvel when I wrote code to draw a Gouraud-shaded polygon (in Pascal, under DOS, back in 1991?).
 

