c# - HLSL Computation - process pixels in order?


Imagine you want to, say, compute the first 1 million terms of the Fibonacci sequence using the GPU. (I realize this will exceed the precision limit of a 32-bit data type; it's just used as an example.)

Given a GPU with 40 shaders/stream processors, and cheating by using a reference book, I can break the million terms into 40 blocks of 25,000 terms each and seed each shader with its 2 start values (a rough sketch of this follows the list):

unit 0: 1, 1 (which calculates 2, 3, 5, 8, and so on)
unit 1: the 25,000th term
unit 2: the 50,000th term
...
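To make that concrete, here is a minimal ping-pong sketch of the seeding idea, assuming a 40x1 floating-point texture where texel i holds unit i's current pair (F(k), F(k+1)) in .r/.g; the sampler name and layout are my assumptions, not anything XNA prescribes:

    // Each texel of a 40x1 float texture holds one unit's state:
    //   .r = F(k), .g = F(k+1) for that unit's block.
    // Render into a second 40x1 target, swap the two textures
    // (ping-pong), and repeat: every pass advances every unit
    // by exactly one term.
    sampler2D BlockState : register(s0);   // hypothetical sampler name

    float4 AdvancePS(float2 uv : TEXCOORD0) : COLOR0
    {
        float2 pair = tex2D(BlockState, uv).rg;       // (F(k), F(k+1))
        // Slide the window one step: (F(k+1), F(k) + F(k+1))
        return float4(pair.y, pair.x + pair.y, 0, 1);
    }

Note that this only advances the pairs in lockstep; actually recording all 25,000 intermediate terms per unit would need an extra write target per pass, which is where the scheme starts fighting the hardware.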

How, if it's possible at all, would I go about ensuring that pixels are processed in order? If the first few pixels in the input texture have these values (with RGBA for simplicity):

0,0,0,1 // initial condition
0,0,0,1 // initial condition
0,0,0,2
0,0,0,3
0,0,0,5
...

How can I ensure that I don't try to calculate the 5th term before the first 4 are ready?

I realize this could be done in multiple passes, setting a "ready" bit whenever a value is calculated, but that seems incredibly inefficient and sort of eliminates the benefit of performing this type of calculation on the GPU.

OpenCL/CUDA/etc. probably provide nice ways to do this, but I'm trying (for my own edification) to get this to work with XNA/HLSL.

Links or examples are appreciated.

Update/simplification

Is it possible to write a shader that uses the values of one pixel to influence the values of a neighboring pixel?

You cannot determine the order in which pixels are processed. If you could, it would break the massive pixel throughput of the shader pipelines. What you can do, though, is calculate the Fibonacci sequence using a non-recursive formula.
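For instance, with Binet's closed form F(n) = round(phi^n / sqrt(5)), every pixel can compute its own term from its coordinate alone, so processing order no longer matters. A minimal sketch (the 1-D layout and the TexWidth parameter are assumptions to be set from the C# side):

    float TexWidth; // width of the output texture, set from C#

    float4 FibonacciPS(float2 uv : TEXCOORD0) : COLOR0
    {
        float n = floor(uv.x * TexWidth);   // this pixel's term index
        const float phi   = 1.6180339887;   // golden ratio
        const float sqrt5 = 2.2360679775;
        // Binet's formula; only exact while F(n) fits in a float's
        // 24-bit mantissa (roughly n <= 36), per the precision caveat
        // in the question.
        return float4(0, 0, 0, floor(pow(phi, n) / sqrt5 + 0.5));
    }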

In your question, you are trying to serialize the shader units so they run one after another. You could just use the CPU right away instead, and it would be faster.

By the way, multiple passes aren't as slow as you might think, but they won't help in your case: you cannot calculate the next value without knowing the previous ones, and that kills the parallelization.
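Regarding the update: a pixel shader can freely sample any texel of its input texture, including a neighbor, because the input is read-only; what it cannot do is see what another pixel writes in the same pass. So neighbor-to-neighbor influence costs one pass per step. A sketch (the sampler and TexelWidth names are assumptions):

    sampler2D PreviousPass : register(s0); // output of the last pass
    float TexelWidth;                      // 1.0 / texture width, set from C#

    float4 NeighborPS(float2 uv : TEXCOORD0) : COLOR0
    {
        // Reading the pixel to the left is fine: it comes from the
        // previous pass's texture, not from this pass's output.
        float left = tex2D(PreviousPass, uv - float2(TexelWidth, 0)).a;
        float here = tex2D(PreviousPass, uv).a;
        return float4(0, 0, 0, here + left);
    }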

