Monday, 21 August 2017

Spatially biased sampling with revector fields: Proof of concept

tldr: A uv coordinate transform field can turn texture sampling into something akin to a 2d quad mesh. This allows much better representations of 2d vector graphics when they are stored as textures.

Sampling

Optimal sampling of raw spatial data does not necessarily consist of uniformly spaced points. While sampling at twice the highest frequency (the Shannon-Nyquist rate) guarantees detection of a given frequency, the resulting representation of the data is quite poor (specifically perceptually).


The two figures below compare a 1d sine function under uniform vs. spatially biased sampling.
Red lines indicate linearly interpolated sample points. Green bars indicate uniform sampling locations.






The figure below shows a 1d sin(x^4) function, which exhibits non-uniform geometric structure.



Non-uniform sampling is favorable when either:
1. The geometric structural regularity of the data is non-uniform
2. The optimal sampling phase does not match the uniform phase
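The first criterion can be demonstrated numerically. The sketch below compares 64 uniform samples against 64 spatially biased samples of sin(x^4), reconstructing each set with linear interpolation. The square-root warp that packs samples toward large x is an illustrative choice of bias, not how the post's figures were produced.

```python
import numpy as np

# Compare uniform vs. spatially biased sample placement for sin(x^4),
# whose frequency grows with x. The sqrt warp below is an illustrative
# biasing choice, not the post's method.

def recon_error(xs, f, dense):
    """Mean abs error of piecewise-linear reconstruction from nodes xs."""
    return np.mean(np.abs(np.interp(dense, xs, f(xs)) - f(dense)))

f = lambda x: np.sin(x ** 4)
dense = np.linspace(0.0, 2.0, 4001)   # dense "ground truth" grid
t = np.linspace(0.0, 1.0, 64)         # 64 samples either way
uniform_xs = 2.0 * t                  # evenly spaced
biased_xs = 2.0 * np.sqrt(t)          # denser where the frequency is high

e_uniform = recon_error(uniform_xs, f, dense)
e_biased = recon_error(biased_xs, f, dense)
print(e_uniform, e_biased)            # biased error should be noticeably smaller
```

At the same sample budget, placing samples where the function oscillates fastest lowers the reconstruction error.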

Experiment

I decided the best way to demonstrate biased sampling was with 2d sampling, i.e. textures. All sampling of these textures uses bilinear filtering. The two criteria above apply to cartoon images, which is what we consider below.



Here we have a 256x256 image and its least-squared-error 64x64 approximation. We introduce a biasing vector field texture (x,y) that transforms uniform samples into biased samples.

In pseudo hlsl

Normal sampling:
rgb = texture2d( downsampled64.png, uv.xy );

Biased revector sampling:
vec2 biasedUV = texture2d( biasVectorField64, uv.xy ).xy;
rgb = texture2d( rebiasedDownsampled64, biasedUV );
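The pseudo-HLSL above can be mirrored in plain numpy. `bilinear_sample` and `biased_sample` are hypothetical helper names; this is a sketch of the two-lookup indirection, not the shader itself.

```python
import numpy as np

def bilinear_sample(tex, u, v):
    """Bilinearly filter a (H, W) or (H, W, C) texture at normalized (u, v)."""
    h, w = tex.shape[:2]
    x = np.clip(u * (w - 1), 0, w - 1)
    y = np.clip(v * (h - 1), 0, h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def biased_sample(tex, bias_field, u, v):
    """Biased revector sampling: one extra lookup through the uv bias field."""
    bu, bv = bilinear_sample(bias_field, u, v)  # biasedUV = texture2d(biasVectorField, uv).xy
    return bilinear_sample(tex, bu, bv)         # rgb = texture2d(rebiasedDownsampled, biasedUV)
```

With an identity bias field (each texel storing its own uv) this reduces to ordinary bilinear sampling; the interesting fields warp uv so that bilinear patches line up with edges in the source image.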

I did not have time to develop an algorithm that directly solves for both the forward biased vector field and the complementary rebiasedDownsample. So I used stochastic sampling to find both the field and the sample data. Below is a comparison of the original, normally bilinear-filtered image against a biased sampled image/field.
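One plausible reading of "stochastic sampling" is a simple hill climb: perturb one texel of the small color texture or of the uv bias field at random, and keep the change only when reconstruction error drops. This is a guess at the flavor of the solver, not the actual algorithm; all resolutions, step sizes, and iteration counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bilerp(tex, u, v):
    """Vectorized bilinear lookup of a (H, W) texture at normalized uv arrays."""
    h, w = tex.shape
    x = np.clip(u * (w - 1), 0, w - 1)
    y = np.clip(v * (h - 1), 0, h - 1)
    x0 = np.floor(x).astype(int); y0 = np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, w - 1); y1 = np.minimum(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

target = (np.indices((32, 32)).sum(0) % 2).astype(float)  # placeholder "original"
small = rng.random((8, 8))                                # texture being optimized
bias_u = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))        # identity warp to start
bias_v = bias_u.T.copy()
vv, uu = np.mgrid[0:1:32j, 0:1:32j]                       # full-resolution uv grid

def error(small, bias_u, bias_v):
    """Mean squared error of the biased reconstruction against the target."""
    bu, bv = bilerp(bias_u, uu, vv), bilerp(bias_v, uu, vv)
    return np.mean((bilerp(small, bu, bv) - target) ** 2)

e0 = e = error(small, bias_u, bias_v)
for _ in range(2000):
    tex = (small, bias_u, bias_v)[rng.integers(3)]
    iy, ix = rng.integers(8, size=2)
    old = tex[iy, ix]
    tex[iy, ix] = np.clip(old + rng.normal(0.0, 0.1), 0.0, 1.0)
    e_new = error(small, bias_u, bias_v)
    if e_new < e:
        e = e_new            # keep the improvement
    else:
        tex[iy, ix] = old    # revert
print(e0, e)                 # accepted error only ever decreases
```

Because only improving perturbations are kept, the error is monotonically non-increasing, which matches the animated error-reduction behavior shown in the trials below.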




Obviously this single forward bias field adds an extra texture lookup, plus the memory and bandwidth cost of sampling the field texture. (Such sampling is coherent, however.)



Radical differences become even more obvious in magnified portions of these images. The biased vector field has spared no expense in its attempts at representing the original image with spatially distributed bilinear filtering.

It should be noted that the final results here are only a proof of concept. They were better than I expected, especially considering the stochastic method used to generate the solutions.

References


Cartoon image: https://en.wikipedia.org/wiki/Goku
Similar work: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf



Additional trials

Here are some more examples. (With animated stochastic error reduction process)



Penguin Logo


Basic text.



DBZ goku.





Monday, 17 July 2017

Fair Random Number Generation for Trustless Peers

This is an obvious technique and merely an application of one-way cryptographic functions.


Suppose players in a peer network wish to participate in a game of chance. The primary requirement is high-quality random binary numbers. However, in a network without a server authority, no single peer can be trusted to create these random values.


The first solution that comes to mind is to have each peer submit a random number and to xor these values together to form the final random value. This would be a valid procedure only if the data were guaranteed to already be in flight along the wire.

Rf =   R1 ^ R2 ^ R3 ^ ... ^ Ri =  ⦻ Ri

However the nature of modern networks prevents us from practically knowing whether the data is really on the wire. This is due to the unknown real symmetry of the underlying network latency. (Network latency can always be artificially increased and made asymmetric.)

What we need is a method of commitment to a chosen random value prior to revealing its value. Cryptographic functions can perform this task.


Method

First we gather salt for the hashes by requesting a nonce from each peer.
🖧 ⟲  N1 , N2 ... Ni          each peer shares a single nonce value with every other peer


Now each peer generates a random number Ri and builds a signature value via a cryptographic one-way function 𝓕.

Si = 𝓕(  Ri ,   N1 , N2 ... Ni )       the signature hash is created from the random value R combined with all nonce values


🖧 ⟲ Si              share signature hash values from all peers to all peers


Once all signatures have been shared the original random values can now be shared and the final random value can be computed.

🖧 ⟲ Ri                                                          share the original random R from each peer with every other peer
Rf =   R1 ^ R2 ^ R3 ^ ... ^ Ri =  ⦻ Ri         the final random value is computed by xor-ing all peer R values

Committed values of Ri can now be verified against Si by all peers.
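The whole protocol can be simulated in one process. The sketch below uses SHA3-256 standing in for 𝓕, as suggested in the discussion; peer count, value sizes, and helper names are illustrative.

```python
import hashlib
import secrets

def commit(r, nonces):
    """S_i = F(R_i, N_1 ... N_k): hash the secret with every peer's nonce."""
    h = hashlib.sha3_256()
    h.update(r)
    for n in nonces:
        h.update(n)
    return h.digest()

k = 3
nonces = [secrets.token_bytes(16) for _ in range(k)]   # phase 1: share nonces
rs     = [secrets.token_bytes(8)  for _ in range(k)]   # each peer's secret R_i
sigs   = [commit(r, nonces) for r in rs]               # phase 2: share only S_i

# Phase 3: reveal R_i; every peer verifies each reveal against its commitment.
assert all(commit(r, nonces) == s for r, s in zip(rs, sigs))

# Final value: xor of all revealed secrets.
rf = bytes(a ^ b ^ c for a, b, c in zip(*rs))
print(rf.hex())
```

A peer that reveals a different R than it committed to is caught immediately, because its reveal no longer hashes to the published S.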

Discussion

  • The suggested one-way function 𝓕 is SHA3. It is currently computationally infeasible to find collisions, which makes it virtually impossible to determine R from S or to change R for a given S.
  • Xor-ing a random value from each peer ensures that if at least one peer actually uses a random value then the final value will contain appropriate entropy.


Example

Peers p1 and p2
N1 = 0x12345678
N2 = 0x9ABCDEF0
N shared.

R1 = 0x11111111    generated by peer 1
R2 = 0x22222222  generated by peer 2

S1 = sha1( R1 , N1 , N2 ) = b4853b44cbb378db14608716c1dd415536b40cf5

S2 = sha1( R2 , N1 , N2 ) = f29c5b7cedf5bd8f5eaded280c3b2586a4071aa0
S shared.

R shared.

Rf = 0x11111111 ^ 0x22222222  =  0x33333333
Si  can be verified.
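The xor arithmetic of the example can be checked directly. Note that the sha1 commitments depend on how R and the nonces are serialized; the big-endian packing below is an assumption, so its digests need not match the listing above.

```python
import hashlib

N1, N2 = 0x12345678, 0x9ABCDEF0
R1, R2 = 0x11111111, 0x22222222

def sig(r):
    """sha1( R , N1 , N2 ) under an assumed big-endian 4-byte packing."""
    data = b"".join(v.to_bytes(4, "big") for v in (r, N1, N2))
    return hashlib.sha1(data).hexdigest()

S1, S2 = sig(R1), sig(R2)
Rf = R1 ^ R2
print(hex(Rf))   # 0x33333333
```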