more fft musings
I was kinda getting nowhere with the fft code, but then I saw something about doing large ffts on the parallella and it gave me some ideas to keep poking.
I did some experimenting with the memory access patterns (and towards the end had a deja vu moment - I did most of this last year too, ...) and a radix-8 implementation, and started thinking about breaking a big problem onto the epiphany. I'm still hand-expanding the expressions using the basic radix-2 algorithm, which I believe is not optimal, but the closer I get to grokking it the easier it will be to understand the more sophisticated implementations.
I was thinking of trying to use the mesh network as a sort of bandwidth multiplier, but the LDS is already one so I'll leave those experiments till later. It should be possible to do something like a 16K element fft on the board quite efficiently but one has to work out the addressing logic first.
So I took the first-stage of my algorithm which performs the bit-reversal permute and a radix-N transform (decimation in time) and converted it to a chunked version. The data transfer works out quite nicely - a single regular sparse DMA can load in the input data which can then be 'recursively' transformed in-place up to the available LDS size - 1K elements allows for double buffering. This can then be written out as a contiguous block ready for subsequent passes. I did some preliminary work on the subsequent passes too and I think I can do them in a similar way - and do multiple passes on-core before dumping it out. The trick will be to process the data in a way that writes can be grouped together at least some of the time.
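In plain C the access pattern looks something like this (just a sketch, not the epiphany code: first_stage, fft_inplace and the chunk size C are my illustrative names, the local array stands in for the LDS, the copy loops stand in for the strided dma-in and contiguous dma-out, and a naive recursive fft stands in for the radix passes):

    #include <complex.h>

    /* reverse the low 'bits' bits of v */
    static unsigned bitrev(unsigned v, int bits) {
            unsigned r = 0;
            for (int i = 0; i < bits; i++)
                    r = (r << 1) | ((v >> i) & 1);
            return r;
    }

    /* naive recursive DIT fft, standing in for the in-LDS radix passes */
    static void fft_inplace(complex float *x, int n) {
            if (n == 1)
                    return;
            complex float even[n / 2], odd[n / 2];
            for (int i = 0; i < n / 2; i++) {
                    even[i] = x[2 * i + 0];
                    odd[i]  = x[2 * i + 1];
            }
            fft_inplace(even, n / 2);
            fft_inplace(odd, n / 2);
            for (int k = 0; k < n / 2; k++) {
                    complex float t = cexpf(-6.2831853f * I * k / n) * odd[k];
                    x[k]         = even[k] + t;
                    x[k + n / 2] = even[k] - t;
            }
    }

    /* first stage over N elements in chunks of C that fit in the LDS:
     * each chunk is one strided load, an in-place size-C transform of the
     * decimated subsequence, and one contiguous store.  Later passes then
     * combine the chunks. */
    void first_stage(const complex float *in, complex float *out, int N, int C) {
            int nchunks = N / C;
            int cbits = 0;

            while ((1 << cbits) < nchunks)
                    cbits++;

            for (int c = 0; c < nchunks; c++) {
                    complex float lds[C];                /* core-local buffer */
                    int base = bitrev(c, cbits);         /* which subsequence */

                    for (int k = 0; k < C; k++)          /* 'sparse' strided load */
                            lds[k] = in[base + k * nchunks];
                    fft_inplace(lds, C);                 /* transform in-place */
                    for (int k = 0; k < C; k++)          /* contiguous write-back */
                            out[c * C + k] = lds[k];
            }
    }

With 1K-element chunks the two copy loops become the sparse dma-in and the contiguous dma-out, and there's still room left to double buffer them.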
So I think the data is doable in a fairly efficient way and the LDS can be used as a bandwidth multiplier. And if the LDS is effectively always doing the same-sized fft some other code efficiencies may come into play.
I haven't looked at the coefficient tables (aka 'twiddle' factors) yet - although i think there should be some regularities there as well.
Time passes ...
I had a closer look today after starting to write this this morning (it's now evening). The coefficients for the radix-4 algorithm turn out to be quite simple the way I'm stepping through everything, and I only need to load 2 values for the 4 coefficients required for a radix-4 step. I then use them as a, a, b, ib (where i = sqrt(-1)). And they are evenly spaced across the table so they can be loaded using 2x dma - I should have enough memory to double-buffer the loads (hmm, or maybe just fit them all in for a given operation since the smaller stages need fewer coefficients).
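Schematically it's something like this (illustrative only - the table layout and names here are made up, not from my actual code):

    #include <complex.h>

    /* expand the 2 stored values into the 4 coefficients needed by one
     * radix-4 step, used as a, a, b, i*b */
    static void radix4_coeffs(const complex float *wtab, int j, complex float w[4]) {
            complex float a = wtab[j + 0];
            complex float b = wtab[j + 1];

            w[0] = a;
            w[1] = a;
            w[2] = b;
            w[3] = b * I;   /* i*b is just (re,im) -> (-im,re), no multiply needed */
    }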
I also started looking at how to break the stages into smaller parts. I broke a 256 element fft into two phases: a first phase, which includes the reordering and does 2x radix-4 passes for an equivalent of radix-16 - it has to do 16 of these. I can then complete the job by doing another 16x 2x radix-4 passes using data spaced 16 elements apart. In this way each 'fft processor' only requires 16 elements of storage and the memory only has to be read and written twice in total. Scaling this up to what I can fit on the epiphany ... I could do a 1M element fft by performing radix-1024 stages on each core; I would have to do 2K of these.
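As a plain C sketch of that split (not my epiphany code: a naive 16-point DFT stands in for the 2x radix-4 passes, and because I haven't folded the reordering in, the strided and contiguous roles of the two phases are swapped relative to the description above and the output lands in 'transposed' order; the twiddle multiply is also a separate loop here rather than being folded into one of the phases):

    #include <complex.h>

    #define N1 16
    #define N2 16

    /* naive 16-point DFT over a strided set, standing in for 2x radix-4 passes */
    static void dft16(complex float *x, int stride) {
            complex float t[N1];

            for (int k = 0; k < N1; k++) {
                    complex float s = 0;
                    for (int n = 0; n < N1; n++)
                            s += x[n * stride] * cexpf(-6.2831853f * I * k * n / N1);
                    t[k] = s;
            }
            for (int k = 0; k < N1; k++)
                    x[k * stride] = t[k];
    }

    /* 256-element fft as 16x 16-element transforms over data spaced 16 apart,
     * a twiddle multiply, then 16x 16-element transforms over contiguous
     * blocks.  Each sub-transform only needs 16 elements of storage.
     * Output is in 'transposed' order: X[k2*16 + k1] ends up at x[k1*16 + k2]. */
    void fft256(complex float *x) {
            for (int c = 0; c < N2; c++)                  /* phase 1: strided */
                    dft16(x + c, N2);

            for (int r = 0; r < N1; r++)                  /* twiddles between phases */
                    for (int c = 0; c < N2; c++)
                            x[r * N2 + c] *= cexpf(-6.2831853f * I * r * c / (N1 * N2));

            for (int r = 0; r < N1; r++)                  /* phase 2: contiguous */
                    dft16(x + r * N2, 1);
    }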
Normally this would be a pretty inefficient memory access pattern on a machine with an L1 cache, unless you could load the data synchronously across the cores (if only the dma engines had 32-bit modulos), but given the memory interface on the epiphany and the lack of cache and burst mode reads I don't think it matters so much. It still affects writes, but if that's a problem there are ways around it.
I haven't yet tried any of this on the epiphany though. On my workstation I did compare it to FFT Package and although it's not as fast it's in the same city - and I haven't fully broken it up into the way it needs to fit onto the epiphany yet. I'm just writing the expressions out in full using complex float and letting the compiler optimise all the sub-expressions and so forth.
Speaking of the compiler, I tried some really basic ops to see how it handled them. The result is a bit odd, but that's probably because it's sitting in a function:
complex float foo1(complex float a, complex float b, complex float w) {
        return a + b * w * I;
}

complex float foo0(complex float a, complex float b, complex float w) {
        return a + b * w;
}
I was just confirming that * I got converted into swapping (re,im) to (-im,re).
Compiler does this:
              .file    "e-boo.c"
              .section .text
              .balign  4
              .global  _foo1
_foo1:
0000 4C150044 ldr      r16,[sp,#2]
0004 EF840220 mov      ip,r1
0008 CC350044 ldr      r17,[sp,#3]
000c 2F2C0701 fmul     r1,r3,r16
0010 3F880721 fmadd    ip,r2,r16
0014 BF280701 fmadd    r1,r2,r17
0018 CF8C0721 fmsub    ip,r3,r17
001c 9740     fsub     r2,r0,r1
001e EF300204 mov      r1,ip
0022 E208     mov      r0,r2
0024 4F190204 rts
              .size    _foo1, .-_foo1
              .balign  4
              .global  _foo0
_foo0:
0028 4C150044 ldr      r16,[sp,#2]
002c CC350044 ldr      r17,[sp,#3]
0030 2F8C0721 fmul     ip,r3,r16
0034 3F080701 fmadd    r0,r2,r16
0038 BF880721 fmadd    ip,r2,r17
003c EF000240 mov      r16,r0
0040 CF0C0741 fmsub    r16,r3,r17
0044 8F900724 fadd     ip,ip,r1
0048 EF000208 mov      r0,r16
004c EF300204 mov      r1,ip
0050 4F190204 rts
              .size    _foo0, .-_foo0
It's doing the * I thing properly, which is expected. But the way the result is formed isn't ideal. I think, with the delay slots added in, it works out to:
        ldr     r16,[sp,#2]     ; w.r
        ldr     r17,[sp,#3]     ; w.i
        fmul    ip,r3,r16       ; b.i * w.r
        fmadd   r0,r2,r16       ; a.r += b.r * w.r
        ;;
        ;;
        ;;
        fmadd   ip,r2,r17       ; b.i * w.r + b.r * w.i
        mov     r16,r0          ; dual issue
        fmsub   r16,r3,r17      ; a.r -= b.i * w.i
        ;;
        ;;
        ;;
        fadd    ip,ip,r1        ; a.i += (b.i * w.r + b.r * w.i)
        mov     r0,r16
        ;;
        ;;
        ;;
        mov     r1,ip
Just using fma operations instead of a separate mul and add removes the need for an extra step, needs fewer instructions, and leaves the result in the right register anyway:
        ;; a.r += b.r * w.r - b.i * w.i;
        ;; a.i += b.i * w.r + b.r * w.i;
        ldr     r16,[sp,#2]     ; w.r
        ldr     r17,[sp,#3]     ; w.i
        fmadd   r0,r2,r16       ; a.r += b.r * w.r
        fmadd   r1,r2,r17       ; a.i += b.r * w.i
        ;;
        ;;
        ;;
        fmsub   r0,r3,r17       ; a.r -= b.i * w.i
        fmadd   r1,r3,r16       ; a.i += b.i * w.r
This isn't really a big issue because normally there is more work going on and more opportunity to schedule things to fill the gaps, but I have noticed it isn't using the fma stuff as much as it might.
On a related note (not directly something I've seen out of the compiler) this is the basic step in an fft calculation:
complex float b0 = a0 + w0 * a1;
complex float b1 = a0 - w0 * a1;
One would normally expect a common sub-expression optimisation to turn this into the following (correctly? well, perhaps: floats are very problematic in general and standards compliance might dictate a certain operation order - I'm using -ffast-math, which ignores these details):
complex float w0a1 = w0 * a1;
complex float b0 = a0 + w0a1;
complex float b1 = a0 - w0a1;
And "on paper" it looks like an improvement: 1 complex multiply and 2 additions compared to 2 and 2; particularly since a complex multiple is 4x multiples and 2x additions itself.
But yeah, it isn't; actually it's worse ... The first can be represented by 8x multiplies and 8x additions in 10 instructions:
        mov     b1r,a0r         ; dual issues
        fmadd   a0r,w0r,a1r
        mov     b1i,a0i         ; dual issues
        fmadd   a0i,w0r,a1i
        fmsub   b1r,w0r,a1r
        fmsub   b1i,w0r,a1i
        ;;
        fmsub   a0r,w0i,a1i
        fmadd   a0i,w0i,a1r
        fmadd   b1r,w0i,a1i
        fmsub   b1i,w0i,a1r
The second can be represented as 4 multiplies and 6 additions in 8 instructions. Better!? No. It adds another delay-stage requirement in the pipeline and has less work to do in each stage, wasting further execution slots.
        fmul    w0a1r,w0r,a1r
        fmul    w0a1i,w0r,a1i
        ;;
        ;;
        ;;
        fmsub   w0a1r,w0i,a1i
        fmadd   w0a1i,w0i,a1r
        ;;
        ;;
        ;;
        fsub    b1r,a0r,w0a1r
        fadd    a0r,a0r,w0a1r
        fsub    b1i,a0i,w0a1i
        fadd    a0i,a0i,w0a1i
Assuming the ialu ops dual-issue, the first takes 9 cycles (the two movs pair with fpu ops, so the 10 instructions fit in 8 issue slots, plus one stall) and the second 14 (8 issue slots plus 6 stalls).
Again, further work to do or loop-unrolling will hide the delay slots, but it just shows you can't go on pure flops as a performance indicator.
On a slightly unrelated note I also had a quick look into extended precision arithmetic and a few other little bits and pieces. I found the paper "Extended-Precision Floating-Point Numbers for GPU Computation" by Andrew Thall, which describes an extended precision floating point format built from 2x 32-bit floating point numbers. I didn't get around to fully exploring it but it may be of use on the epiphany for certain applications since software doubles are not fast.
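The basic building block (this is just the textbook two-sum, not code from the paper) keeps a value as the unevaluated sum of two floats:

    /* float-float value: kept as the unevaluated sum hi + lo.  Note this
     * relies on strict IEEE evaluation order, so no -ffast-math here. */
    typedef struct {
            float hi, lo;
    } ff;

    /* exact sum of two floats, returned as hi + lo (Knuth's two-sum) */
    static ff two_sum(float a, float b) {
            float s = a + b;
            float bv = s - a;
            float av = s - bv;

            return (ff){ s, (a - av) + (b - bv) };
    }

    /* add two float-float values; roughly double the precision of a plain float */
    static ff ff_add(ff x, ff y) {
            ff s = two_sum(x.hi, y.hi);

            return two_sum(s.hi, s.lo + x.lo + y.lo);
    }

A float-float multiply is where the fma operations come in handy, since the rounding error of a product can be recovered with a single fused multiply-subtract.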