Network Rendering - GI & region rendering (?) issues

Started by ahoerl, January 14, 2016, 10:26:51 AM


guest84672

Just tried this. Can't reproduce it. I am running a slightly later version though.

APankow

#16
Also, here is the script that I wrote to handle this task.

class keyshot_multi_cam_tt_render:

    def __init__( self, render_directory, width = 1280, height = 1024, max_samples = 120 ):
        self.w = width
        self.h = height
        self.s = max_samples
        self.render_dir = render_directory
        self.cam_pattern = 'TT_CAM_'
        self.cams = lux.getCameras()

        self.save_path = lux.getSceneInfo()['file']
        # render options: cap the samples and send every job to the network queue
        self.ro = lux.getRenderOptions()
        self.ro.setMaxSamplesRendering( self.s )
        self.ro.setSendToNetwork( True )

    def limit_cameras( self, camera_id_list ):
        # keep only the turntable cameras whose numeric id is in camera_id_list
        for cam in [ x for x in self.cams if x.startswith( self.cam_pattern ) ]:
            cam_id = int( cam[ len( self.cam_pattern ):-1 ].lstrip( "0" ) )
            if cam_id not in camera_id_list:
                self.cams.remove( cam )

    def send_to_queue( self ):
        # iterate self.cams so any prior limit_cameras() call is respected
        for camera in [ x for x in self.cams if x.startswith( self.cam_pattern ) ]:
            lux.setCamera( camera )
            render_id = camera[ len( self.cam_pattern ):-1 ].lstrip( "0" )
            render_id = render_id.zfill( 4 )    # pad to four digits, e.g. "0042"
            render_file = self.render_dir + "image." + render_id + ".jpg"
            if lux.renderImage( render_file, self.w, self.h, self.ro ):
                continue


Torus = keyshot_multi_cam_tt_render( "F:/turntable_test/", 1280, 1024, 120 )
# cameras_to_rerender = [ 748, 746, 455, 76, 75, 74, 73, 72, 71, 70, 64, 62, 58, 55, 48, 47, 45, 41, 38, 34, 32, 30, 26, 24, 18, 16, 14, 10, 8, 5, 1 ]
# Torus.limit_cameras( cameras_to_rerender )
Torus.send_to_queue()
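
To rebuild the cameras_to_rerender list after a pass, something like the sketch below can scan the render directory for frames that never wrote out. This is only a rough sketch: it assumes the image.XXXX.jpg naming used above, and find_missing_frames and the frame count of 750 are placeholders, not part of the script.

import os

def find_missing_frames( render_dir, total_frames = 750 ):
    # return the camera ids whose image.XXXX.jpg output is missing from render_dir
    missing = []
    for cam_id in range( 1, total_frames + 1 ):
        render_file = os.path.join( render_dir, "image.%04d.jpg" % cam_id )
        if not os.path.isfile( render_file ):
            missing.append( cam_id )
    return missing

# cameras_to_rerender = find_missing_frames( "F:/turntable_test/" )
# Torus.limit_cameras( cameras_to_rerender )
# Torus.send_to_queue()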



*edit* But I am still getting strange banding from buffer leakage.

guest84672

So this happens with just one of the cameras, or on all cameras?

APankow

This occurs randomly. I lose about 25% of my queue to it, but in no predictable pattern. Most of the time it hits every other frame once the error starts to rear its head; other times it takes down 10 frames in a row and then goes back to normal without any intervention from me.

APankow

#19
I have reinstalled all the software and services, rebooted all machines, moved the Slave and Master resource directories to/from the OS drives and data drives, and even tried every output format. The Network Renderer (6.2.105) is destroying an average of 41% of my queue and requiring me to micromanage an automated process. I have been running close to deadlines and have even had to turn down projects.

What can we do to fix this, asap?

Niko Planke

Hey

@furmano
May I ask, did you transfer the BIP file as a KSP file at some point in your workflow?
Also, your KeyShot does not use the same version as Network Rendering. You should always use the exact same version for Network Rendering as for your KeyShot to ensure best compatibility. (In case of older versions of KeyShot, use the latest release; these are 5.3.6, 4.3.18, 3.3.33, and 2.3.2.)

@APankow
May I ask why you are not using KeyShot VR for this kind of rendering? It should be able to achieve the same result with less trouble.

As far as I can see from the script you use, there is a possibility that your Network Rendering master is running low on disk space due to the high number of queued tasks (over 7000?). Are you able to monitor the disk space on the Network Rendering master?

Did you try dividing the renderings into smaller chunks (submitting only a few jobs at a time)?
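
For example, something along the lines of the sketch below, run on the master machine, could check the free space on the resource drive and submit only one chunk of cameras per run. This is a rough, illustrative sketch reusing the keyshot_multi_cam_tt_render class posted above; the directory, threshold, chunk size, and frame count are placeholders, not values from this thread.

import shutil

MASTER_RESOURCE_DIR = "C:/KeyShot Network Resources"   # placeholder path
MIN_FREE_GB = 50                                        # placeholder threshold
CHUNK_SIZE = 100                                        # cameras per submission
CHUNK_INDEX = 0                                         # increase by one for each run

free_gb = shutil.disk_usage( MASTER_RESOURCE_DIR ).free / ( 1024 ** 3 )
if free_gb < MIN_FREE_GB:
    raise RuntimeError( "Resource drive is low on space: %.1f GB free" % free_gb )

all_ids = list( range( 1, 751 ) )                       # e.g. 750 turntable frames
chunk = all_ids[ CHUNK_INDEX * CHUNK_SIZE : ( CHUNK_INDEX + 1 ) * CHUNK_SIZE ]

job = keyshot_multi_cam_tt_render( "F:/turntable_test/", 1280, 1024, 120 )
job.limit_cameras( chunk )
job.send_to_queue()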



APankow

#21
@Niko Planke

The VR was doing the same thing. This was our workaround so that we could continuously add the failed renders back to the queue. We are also using pretty high-end servers (dual Xeon E5s, 256 GB RAM each) with huge disks (8 TB each), so we are not running out of room.

florian80333

Hello,

could somebody from KeyShot give an update to this issue please?

We are facing the problem very frequently, which brings us close to switching our 50+ slave network to a competitor's solution.

regards

Florian

Niko Planke

Hello Florian,

We just released version 6.3, which should address multiple issues related to Network Rendering.
Please make sure to update your setup.

Since multiple issues have been discussed in this topic, can you specify which of them you are experiencing?
Feel free to post example images to help us identify the issue.

If you are experiencing banding with the Interior mode on still images (the initial issue "ahoerl" reported), please note that the Interior mode will not work optimally for network renderings of still images at lower sample counts. In that case it is recommended to disable the Interior mode for network rendering.

APankow

Florian,

We were almost at that point as well. We had to bypass our company network to resolve this problem. Make sure all of your dedicated render nodes are segregated from machines that may be on the network but are physically distant or remote; the signal delay seemed to be causing our issue.
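
A quick sanity check along those lines is to ping each node and flag any that are slow to answer. This is just a rough sketch, not something from this thread, and the host names are placeholders.

import platform
import subprocess

SLAVES = [ "render-node-01", "render-node-02", "remote-workstation-07" ]

# "-n" is the ping count flag on Windows, "-c" on Linux/macOS
count_flag = "-n" if platform.system() == "Windows" else "-c"

for host in SLAVES:
    try:
        result = subprocess.run( [ "ping", count_flag, "1", host ],
                                 stdout = subprocess.DEVNULL,
                                 stderr = subprocess.DEVNULL,
                                 timeout = 5 )
        ok = ( result.returncode == 0 )
    except subprocess.TimeoutExpired:
        ok = False
    print( host, "OK" if ok else "slow or unreachable" )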

Hope this helps.